Human-Computer Interaction: Latest Articles

Commentary: Societal Reactions to Hopes and Threats of Autonomous Agent Actions: Reflections about Public Opinion and Technology Implementations
IF 5.3 · CAS Q2 · Engineering & Technology
Human-Computer Interaction Pub Date : 2021-11-24 DOI: 10.1080/07370024.2021.1976642
Kimon Kieslich
Abstract: In the paper Avoiding Adverse Autonomous Agent Actions, Hancock (2021) sketches the technological development of autonomous agents leading to a point in the (near) future where machines become truly independent agents. He further elaborates that this development comes with great promises but also serious, even existential, threats. Hancock concludes by highlighting the importance of preparing against problematic actions that autonomous agents might enact, and suggests measures for humanity to take. In this commentary, I will not speculate about whether and when machine intelligence will exceed human intelligence. Instead, I will reflect on the societal challenges outlined in Hancock's article. More specifically, I will address the role of public opinion as a factor in the implementation of autonomous agents into society. Thereby, public perception of potential strengths and opportunities may lead to exaggerated expectations, while public perception of potential weaknesses and threats may lead to overexceeded […]
Citations: 4
Automation and redistribution of work: the impact of social distancing on live TV production
IF 5.3 · CAS Q2 · Engineering & Technology
Human-Computer Interaction Pub Date : 2021-11-23 DOI: 10.1080/07370024.2021.1984917
Pavel Okopnyi, Frode Guribye, V. Caruso, O. Juhlin
Abstract: The TV industry has long been under pressure to adapt its workflows to use advanced Internet technologies. It also must face competition from social media, video blogs, and livestreaming platforms, which are enabled by lightweight production tools and new distribution channels. The social-distancing regulations introduced due to the COVID-19 pandemic added to the list of challenging adaptations. One of the remaining bastions of legacy TV production is the live broadcast of sporting events and news. These production practices rely on tight collaboration in small spaces, such as control rooms and outside broadcast vans. This paper focuses on current socio-technical changes, especially changes and adaptations in collaborative practices and workflows in TV production. Some changes necessary during the pandemic may be imposed, temporary adjustments to the ongoing situation, but some might induce permanent changes in key work practices in TV production. Further, these imposed changes are aligned with already ongoing changes in the industry, which are now being accelerated. We characterize the changes along two main dimensions: redistribution of work and automation.
Citations: 1
Commentary: “Autonomous” agents? What should we worry about? What should we do?
IF 5.3 · CAS Q2 · Engineering & Technology
Human-Computer Interaction Pub Date : 2021-11-23 DOI: 10.1080/07370024.2021.1977129
Loren Terveen
Citations: 0
Commentary: controlling the demon: autonomous agents and the urgent need for controls
IF 5.3 · CAS Q2 · Engineering & Technology
Human-Computer Interaction Pub Date : 2021-11-22 DOI: 10.1080/07370024.2021.1977127
P. Salmon
Abstract: In “Avoiding adverse autonomous agent actions,” Hancock (this issue) argues that controlling and exploiting autonomous systems represents one of the fundamental challenges of the 21st century. His parting shot is the disquieting and challenging observation that, with autonomous agents, we may be creating a new “peak predator” from which there will be no recovery of human control. The next generation of Artificial Intelligence (AI), Artificial General Intelligence (AGI), could see the idea of a new technological peak predator become reality. AGI will possess the capacity to learn, evolve, and modify its functional capabilities, and could quickly become intellectually superior to humans (Bostrom, 2014). Though estimates of when AGI will appear vary, the exact time of arrival is perhaps a moot point. What is more important, as Hancock alludes to, is that work is required immediately to ensure that the impact on humanity is positive rather than negative (Salmon et al., 2021). Should we take a reactive approach and only focus our efforts once AGI is created, it will already be too late (Bostrom, 2014). The first AGI system will quickly become uncontrollable.
Citations: 1
Commentary: Should humans look forward to autonomous others?
IF 5.3 · CAS Q2 · Engineering & Technology
Human-Computer Interaction Pub Date : 2021-11-17 DOI: 10.1080/07370024.2021.1976639
John M. Carroll
Abstract: Hancock (this issue) identifies potential adverse consequences of emerging autonomous agent technology. He is not talking about Roomba and Waymo, but about future systems that could be more capable and, potentially, far more autonomous. As is often the case, it is difficult to fit a precise timeline to future technologies, and, as Hancock argues, human inability to fathom or even to perceive the timeline for autonomous agents is a specific challenge in this regard (see below). One concern Hancock raises is that humans, in pursuing the development of autonomous agents, are ipso facto ceding control of life on Earth to a new “peak predator.” At least in part, he is led to the predator framing through recognizing that autonomous systems are developing in a context of human conflict, indeed, often as “implements of human conflict.” Hancock argues that the transition to a new peak predator, even if not a Skynet/Terminator catastrophe, is unlikely to be the path “ultimately most conducive to human welfare,” and that, having ceded the peak position, it is doubtful humans could ever again regain control. Hancock also raises a set of concerns regarding the incommensurability of fundamental aspects of experience for human and autonomous agents. For example, humans and emerging autonomous agents could experience time very differently. Hancock criticizes the expression “real time” as indicative of how humans uncritically privilege a conception of time and duration indexed closely to the parameters of human perception and cognition. However, emerging autonomous agents might think through and carry out complex courses of action before humans could notice that anything even happened.

Indeed, Hancock notes that the very transition from contemporary AI to truly autonomous agents could occur in what humans will experience as “a single perceptual moment.” Later in his paper, Hancock broadens the incommensurability point: “. . . there is no necessary reason why autonomous cognition need resemble human cognition in any manner.” These concerns are worth analyzing, criticizing, and planning for now. The peak predator concern is the most vivid and concretely problematic hypothetical in Hancock’s paper. However, to the extent that emerging autonomous agents become autonomous, they would not be mere implements of human conflict. What is their stake in human conflicts? Why would they participate at all? They might be peak predators in the low-level sense of lethal capability, but predators are motivated to be predators for extrinsic reasons. What could those reasons be for autonomous agents? If we grant that they would be driven by logic, we ought to be able to come up with concrete possibilities, future scenarios that we can identify and plan for. There are, most definitely, reasons to question the human project of developing autonomous weapon systems, and, more specifically, of exploring possibilities for extremely powerful and autonomous weapon systems without a deep u[…]
Citations: 0
Commentary: the intentions of washing machines
IF 5.3 · CAS Q2 · Engineering & Technology
Human-Computer Interaction Pub Date : 2021-11-16 DOI: 10.1080/07370024.2021.1976640
Richard H. R. Harper
Abstract: Hancock makes a range of claims, but the most important is this: if a machine ‘learns,’ then, eventually, it will become ‘self-aware.’ It is self-awareness, he argues, that will distinguish machines that are merely autonomous (i.e., which work without human intervention, of which there are many) from those which do something else, which become, in the things they do, like us. I cannot understand why one would think this move from learning to awareness would happen, but Hancock is convinced. One might add that it is not his discipline that leads to this view; there is no human factors research that asserts or demonstrates that self-awareness emerges through learning, for example, or at least none that I am aware of. Certainly, Hancock does not cite any. On the contrary, it seems that Hancock takes this idea from the AI community, though as it happens it is an argument that coat-tails on similar notions put forward by cognitive scientists. Some philosophers argue the same, too, such as Dennett (for the view from AI and computer science, see Russell, 2019; for the view of cognitive science, see Tallis, 2011; for a review of the philosophy, see Harper et al., 2016). Be that as it may, let me focus on this claim and ask what ‘self-awareness’ might mean or how it might be measured. It seems to me that this is a question to do with anthropology. Hence, one way of approaching this is through imagining how people would act when self-awareness is at issue (Pihlström, 2003, pp. 259–286). Or, put another way, one can approach it by asking what someone might mean when they say they are ‘self-aware.’ One might ask, too, why would they say it? I think they do so if they are ‘conscious’ of such things as their intentions.

‘I am about to do this,’ they say, when they are wanting some advice on that course of action. Intentions are a measure of self-awareness. So, is Hancock saying that autonomous machines would be conscious of their intentions, and would that mean, too, that they would treat these intentions as accountable matters? Would that mean, say, that washing machines could have intentions of various kinds? And more, would it mean that these emerge from the learning that washing machines engage in? There are a number of thoughts that arise given this anthropological ‘vignette’ of washing machines and their intentions. How would these intentions be shown? Would these machines need to speak? Besides, when would these machines have these intentions? At what point during learning would they arise? After they have been working a while? One might presuppose some answers here: a machine might only ‘speak’ (if that is its mode of accountability) once it is switched on. Moreover, one imagines a washing machine would not have any intentions when it was being assembled, nor would it have any when it was being disassembled either (as it happens, Hancock refers to similar matters when he reminds the reader of one of his many phrases in earlier human factors articles: this t[…]
Citations: 1
Exploring Anima: a brain–computer interface for peripheral materialization of mindfulness states during mandala coloring
IF 5.3 · CAS Q2 · Engineering & Technology
Human-Computer Interaction Pub Date : 2021-11-16 DOI: 10.1080/07370024.2021.1968864
Claudia Daudén Roquet, C. Sas, Dominic Potts
Abstract: I could feel my mind buzzing after another long day at work. Driving home, I am looking forward to my “me time” ritual of playing with colors. As I arrive, I get myself comfortable, pick up an orange crayon, and start coloring a mandala with beautiful lace-like details. For that, I have to fully concentrate, and my attention is focused on the unfolding present experience of slowly and mindfully filling in the mandala with color. Once I have filled in all the little spaces in the central layer, I pick up a green crayon and color the next layer. When I make mistakes, it is usually because I am not paying attention. I now tend to accept them and work my way around them. Before I know it, my mandala is complete, and my buzzing mind has calmed down. I can even pinpoint some subtle feelings unreachable when I started, wondering also how I could do better next time. By looking at the colored mandala, I can see from my mistakes when I was less mindful and lost focus. I also know that there were other moments of lost focus, albeit I cannot see them in my mandala, maybe because these happened while coloring larger areas, where mistakes are easier to avoid even without concentration. This scenario, inspired by our study findings, illustrates the richness of mandala coloring as an illustration of a focused attention mindfulness (FAM) practice. It shows the importance of intention, attention, and non-judgmental acceptance, with an invitation to explore how the materialization of mindfulness states onto colors may provide value to this practice.

While acknowledging the complexity of mindfulness constructs (Hart et al., 2013), for the purpose of our work we adopt the working definition of mindfulness as “the awareness that emerges through paying attention on purpose, in the present moment, and non-judgmentally to the unfolding of experience moment by moment” (p. 145) (Kabat-Zinn, 2009). Nevertheless, consistent findings in the literature indicate that the skills required to sustain and regulate attention are challenging to develop (Kerr et al., 2013; Sas & Chopra, 2015). Mindfulness practices have been broadly categorized under focused attention, involving sustained attention on an intended object, and open monitoring, with broader attentional focus and hence no explicit object of attention (Lutz et al., 2008). While FAM targets the focus and maintenance of attention by narrowing it to a selected stimulus despite competing others and, when attention is lost, disengaging from these distracting stimuli to redirect it back to the selected one, open monitoring involves broadening the focus of attention through a receptive and non-judgmental stance toward moment-to-moment internal salient stimuli such as difficult thoughts and emotions (Britton, 2018).

FAM is typically the starting point for novice meditators, with the main object of attention being either internal (e.g., focus on the breathing in sitting meditation (Prpa et al., 2018; Vidyarthi et al[…]
Citations: 11
Existential time and historicity in interaction design
IF 5.3 · CAS Q2 · Engineering & Technology
Human-Computer Interaction Pub Date : 2021-11-16 DOI: 10.1080/07370024.2021.1912607
F. V. Amstel, R. Gonzatto
Abstract: Time is considered a defining factor for interaction design (Kolko, 2011; Löwgren, 2002; Malouf, 2007; Mazé, 2007; Smith, 2007), yet little is known about its history in this field. The history of time is non-linear and uneven, understood as part of each society’s cultural development (Friedman, 1990; Souza, 2016). As experienced by humans, time is socially constructed, using the available concepts, measurement devices, and technology in a specific culture. Since each human culture produces its own history, there are also multiple courses of time. The absolute, chronological, and standardized clock time is just one of them, yet one often imposed on other cultures through colonialism, imperialism, globalization, and other international relationships (Nanni, 2017; Rifkin, 2017). Digital technology is vital for this imposition, and interaction design has responsibility for it. As everyday life becomes increasingly mediated by digital technologies, their rhythms (Lefebvre, 2004) are formalized, structured, or replaced by algorithms that structure everyday life rhythms (a.k.a. algo-rhythms) and offer little accountability and local autonomy (Finn, 2019; Firmino et al., 2018; Miyazaki, 2013; Pagallo, 2018). These algo-rhythms enforce absolute time over other courses of time as a means to pour in modern values like progress, efficiency, and profit-making. Despite the appearance of universality, these values do have a local origin. They come from developed nations, where modernity and, more recently, neoliberalism were invented and dispatched to the rest of the world, as if they were the only viable modes of collective existence (Berardi, 2017; Harvey, 2007).

Interaction design contributes to this dispatch by embedding, and hiding, modern and neoliberal values and modes of existence in digital technology’s temporal form (Bidwell et al., 2013; Lindley, 2015, 2018; Mazé, 2007). In the last 15 years, critical and speculative design research has questioned absolute time in interaction design (Huybrechts et al., 2017; Mazé, 2019; Nooney & Brain, 2019; Prado de O. Martins & Vieira de Oliveira, 2016). This research stream made the case that time can also be designed in relative terms: given a certain present, what are the possible pasts and futures? Looking at alternative futures (Bardzell, 2018; Coulton et al., 2016; Duggan et al., 2017; Linehan et al., 2014; Tanenbaum et al., 2016) or alternative pasts (Coulton & Lindley, 2017; Eriksson & Pargman, 2018; Huybrechts et al., 2017) enables realizing alternative presents and alternative designs (Auger, 2013; Coulton et al., 2016; Dunne & Raby, 2013). These alternatives often include deviations from the (apparently) inevitable single-story future shaped by digital technologies envisioned by big tech companies. The deviation expands the design space (the scenarios considered in a design project; Van Amstel et al., 2016; Van Amstel & Garde, 2016) to every kind of social activity, even the noncommercial. Dystopia[…]
Citations: 4
Avoiding adverse autonomous agent actions
IF 5.3 · CAS Q2 · Engineering & Technology
Human-Computer Interaction Pub Date : 2021-11-16 DOI: 10.1080/07370024.2021.1970556
P. Hancock
Abstract: Few today would dispute that the age of autonomous machines is nearly upon us (cf. Kurzweil, 2005; Moravec, 1988), if it is not already. While it is doubtful that one can identify any fully autonomous machine system at this point in time, especially one that is openly and publicly acknowledged to be so, it is far less debatable that our present line of technological evolution is leading toward this eventuality (Endsley, 2017; Hancock, 2017a). It is this specter of the consequences of adverse, even existentially threatening, events emanating from these penetrative autonomous systems that is the focus of the present work. The impending and imperative question is: what do we intend to do about these prospective challenges? As with essentially all of human discourse, we can imagine two sides to this question. One side is represented by an optimistic vision of a near-utopian future, underwritten by AI support and some inherent degree of intrinsic benevolence. The opposing vision promulgates a dystopian nightmare in which machines have gained almost total ascendancy and only a few “plucky” humans remain. The latter is most especially a featured trope of the human heroic narrative (Campbell, 1949). It will most probably be the case that neither of the extremes on this putative spectrum of possibilities will represent the eventual reality that we will actually experience. However, the ground rules are now in the process of being set which will predispose us toward one of these directions over the other (Feng et al., 2016; Hancock, 2017a). Traditionally, many have approached this general form of technological inquiry by asking questions about strengths, weaknesses, threats, and opportunities. Consequently, it is within this general framework that this present work is offered.

What follows are some overall considerations of the balance of the value of such autonomous systems’ inauguration and penetration. These observations provide the bedrock from which to consider the specific strengths, weaknesses, threats (risks), and promises (opportunities) dimensions. The specific consideration of the application of the protective strategies of the well-known hierarchy of controls (Haddon, 1973) then acts as a final prefatory consideration to the concluding discussion, which examines the adverse actions of autonomous technological systems as a potential human existential threat. The term autonomy is one that has been, and still is, the subject of much attention, debate, and even abuse (see Ezenkwu & Starkey, 2019). To an extent, the term seems to be flexible enough to encompass almost whatever the proximal user requires of it. For example, a simple, descriptive word-cloud (Figure 1) illustrates the various terminologies that surround our present use of this focal term. It is not the present purpose here to engage in a long, polemic, and potentially unedifying dispute specifically about the term’s definition. This is because the present concern is with auto[…]
Citations: 16
Commentary: extraordinary excitement empowering enhancing everyone
IF 5.3 · CAS Q2 · Engineering & Technology
Human-Computer Interaction Pub Date : 2021-10-05 DOI: 10.1080/07370024.2021.1977128
B. Shneiderman
Abstract: I eagerly support Peter Hancock’s desire to avoid adverse autonomous agent actions, but I think that he should change from his negative and pessimistic view to a more constructive stance about how ...
Citations: 2