Digital Sustainability for Energy-Efficient Behaviours: A User Representation and Touchpoint Model
Stephen McCarthy, Titiana Ertiö, Ciara Fitzgerald, Nina Kahma
Information Systems Frontiers, published 2024-07-08. DOI: 10.1007/s10796-024-10509-7

Abstract: In response to climate change, nations have been tasked with reducing energy consumption and lessening their carbon footprint through targeted actions. While digital technologies can support this goal, our understanding of energy practices in a private household context remains nascent. This challenge is amplified by the 'invisible' nature of users' interaction with energy systems and the impact of unconscious habits. Our objective is to explore how touchpoints embedded in digital sustainability platforms shape energy-efficiency behaviours among users. Building on data from semi-structured interviews and a two-hour co-creation workshop with 25 energy experts in the ECO2 project, we first identify three user representations of relevance to such platforms: energy-unaware, living in denial, and energy-aware and active. Our findings suggest that 'static' user representations (based on user demographics and average consumption) are giving way to socio-cognitive representations that follow users' journeys in energy efficiency. We then develop a set of design principles to promote sustainable energy behaviours through digital sustainability platforms across user-owned, social/external, brand-owned, and partner-owned touchpoints. An analysis of user feedback from the ECO2 project shows support for our design principles across users' journeys. Of 62 respondents covering all three representations, 76% intended to "implement changes in terms of energy consumption and energy efficiency".

A Taxonomy of Home Automation: Expert Perspectives on the Future of Smarter Homes
Shabnam FakhrHosseini, Chaiwoo Lee, Sheng-Hung Lee, Joseph Coughlin
Information Systems Frontiers, published 2024-07-08. DOI: 10.1007/s10796-024-10496-9

Abstract: Recent advancements in digital technologies, including artificial intelligence (AI), the Internet of Things (IoT), and information and communication technologies (ICT), are transforming homes into interconnected ecosystems of services. Yet discourse on home technologies remains fragmented due to inconsistent terminologies. This paper addresses the lack of a common framework by studying distinctions between smart and non-smart homes and forecasting the growth of connectivity and automation. Twenty-one experts participated in online surveys and interviews in 2021, exploring the language, structure, and technical and social aspects of basic and smarter homes. Quantitative survey data and qualitative interview analyses yield insights on defining smarter homes, barriers to adoption, and framework improvements towards universal definitions. The study underscores the urgency of harmonizing language and concepts in the smart home domain, revealing gaps in user understanding and usability issues as barriers, and thereby helps bridge gaps in consumer engagement and technology adoption.

Fighting Fire with Fire: Combating Criminal Abuse of Cryptocurrency with a P2P Mindset
Galit Klein, Djamchid Assadi, Moti Zwilling
Information Systems Frontiers, published 2024-07-06. DOI: 10.1007/s10796-024-10498-7

Abstract: As part of the P2P sharing economy, cryptocurrencies offer both creative and criminal opportunities. To deal with offenders, solutions such as legislation and regulation have been proposed; however, these are foreign to the P2P spirit of trusted interactions and transactions. This paper aims to identify solutions that align with P2P technologies and relationships to combat the criminal use of cryptocurrencies. In line with our research question, we adopt a grounded theory method. Based on 45 interviews across 1,500 hours of podcasts, blogs, and TV shows, we observed how experts in finance, technology, and cryptocurrency analyzed the hazards of, as well as the solutions to, cryptocurrency schemes. The results indicate that this new technology has also engendered new types of criminal schemes; malicious behaviors can thus be categorized into conventional and P2P hazards. Experts likewise point to conventional and P2P solutions to crypto-crime at the individual, organizational, communal, and national levels. In doing so, they underscore the discrepancy between those who push for solutions favoring conventional regulatory forces and those advocating for normative legitimacy, pulling the industry to preserve its P2P identity. Following institutional theory and the need for legitimacy in this new and disruptive industry, we discuss the tension between these agendas and suggest unorthodox solutions for an innovative yet troubled technology.

Data Ingestion Validation Through Stable Conditional Metrics with Ranking and Filtering
Niels Bylois, Frank Neven, Stijn Vansummeren
Information Systems Frontiers, published 2024-07-05. DOI: 10.1007/s10796-024-10504-y

Abstract: We introduce an advanced method for validating data quality, which is crucial for ensuring reliable analytics insights. Traditional data quality validation relies on data unit tests, which use global metrics to determine if data quality falls within expected ranges. Unfortunately, these existing approaches suffer from two limitations. Firstly, they offer only coarse-grained assessments, missing fine-grained errors. Secondly, they fail to pinpoint the specific data causing test failures. To address these issues, we propose a novel approach using conditional metrics, enabling more detailed analysis than global metrics. Our method involves two stages: unit test discovery and monitoring/error identification. In the discovery phase, we derive conditional-metric-based unit tests from historical data, focusing on stability to select appropriate metrics. The monitoring phase involves using these tests for new data batches, with conditional metrics helping us identify potential errors. We validate the effectiveness of this approach using two datasets and seven synthetic error scenarios, showing significant improvements over global metrics and promising results in fine-grained error detection for data ingestion validation.

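To make the global/conditional distinction concrete, the following is a minimal Python sketch, not the authors' implementation: it contrasts a global completeness metric with the same metric conditioned on another column, and uses the conditional version as a data unit test against per-slice historical ranges. All column names, thresholds, and function names are illustrative assumptions.

    import pandas as pd

    def global_completeness(batch: pd.DataFrame, column: str) -> float:
        # Global metric: fraction of non-null values over the whole batch.
        return batch[column].notna().mean()

    def conditional_completeness(batch: pd.DataFrame, column: str, cond_column: str) -> pd.Series:
        # Conditional metric: the same statistic, computed per value of the conditioning column.
        return batch.groupby(cond_column)[column].apply(lambda s: s.notna().mean())

    def conditional_unit_test(batch: pd.DataFrame, column: str, cond_column: str,
                              expected_ranges: dict) -> dict:
        # A data unit test over the conditional metric: `expected_ranges` maps each
        # conditioning value to a (low, high) interval, e.g. derived from historical
        # batches where the metric was stable. Returns the slices that violate it.
        observed = conditional_completeness(batch, column, cond_column)
        failures = {}
        for value, metric in observed.items():
            low, high = expected_ranges.get(value, (0.0, 1.0))
            if not (low <= metric <= high):
                failures[value] = metric
        return failures  # a non-empty result pinpoints the offending data slices

A global metric would only report one number for the whole batch; the conditional test above reports which conditioning values (data slices) break their expected range, which is the fine-grained error localization the abstract describes.
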
Making It Possible for the Auditing of AI: A Systematic Review of AI Audits and AI Auditability
Yueqi Li, Sanjay Goel
Information Systems Frontiers, published 2024-07-02. DOI: 10.1007/s10796-024-10508-8

Abstract: Artificial intelligence (AI) technologies have become the key driver of innovation in society. However, numerous vulnerabilities of AI systems, such as biases encoded in the training data and algorithms and a lack of transparency, can lead to negative consequences for society. This calls for AI systems to be audited to ensure that their impact on society is understood and mitigated. To enable AI audits, auditability measures need to be implemented. This study provides a systematic review of academic and regulatory work on AI audits and AI auditability. The results reveal the current understanding of the AI audit scope, audit challenges, and auditability measures. We identify and categorize AI auditability measures for each audit area, the specific processes to be audited, and the party responsible for each process. Our findings will guide existing efforts to make AI systems auditable across their lifecycle.

Comparing and Improving Active Learning Uncertainty Measures for Transformer Models by Discarding Outliers
Julius Gonsior, Christian Falkenberg, Silvio Magino, Anja Reusch, Claudio Hartmann, Maik Thiele, Wolfgang Lehner
Information Systems Frontiers, published 2024-06-26. DOI: 10.1007/s10796-024-10503-z

Abstract: Despite achieving state-of-the-art results in nearly all Natural Language Processing applications, fine-tuning Transformer-encoder based language models still requires a significant amount of labeled data to achieve satisfactory results. A well-known technique to reduce the amount of human effort in acquiring a labeled dataset is Active Learning (AL): an iterative process in which only a minimal number of samples is labeled. AL strategies require access to a quantified confidence measure of the model predictions; a common choice is the softmax activation function of the final neural network layer. In this paper, we compare eight alternatives on seven datasets and show that the softmax function provides misleading probabilities. Our finding is that most of the methods primarily identify hard-to-learn-from samples (commonly called outliers), resulting in worse-than-random performance, instead of samples that actually reduce the uncertainty of the learned language model. As a solution, this paper proposes Uncertainty-Clipping, a heuristic that systematically excludes such outlier samples and yields improvements for most methods compared to the softmax function.

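The abstract does not spell out the clipping rule, but the general idea of discarding the most uncertain samples (the presumed outliers) before selecting queries can be sketched as follows. This is a minimal, illustrative Python sketch under my own assumptions; the least-confidence measure, the 5% clip fraction, and the function names are not the authors' implementation.

    import numpy as np

    def least_confidence(probs: np.ndarray) -> np.ndarray:
        # Uncertainty as 1 minus the highest predicted class probability per sample.
        return 1.0 - probs.max(axis=1)

    def select_with_uncertainty_clipping(probs: np.ndarray, batch_size: int,
                                         clip_fraction: float = 0.05) -> np.ndarray:
        # Rank unlabeled samples by uncertainty, discard the top `clip_fraction`
        # (presumed hard-to-learn-from outliers), then pick the most uncertain
        # of the remaining samples for labeling.
        uncertainty = least_confidence(probs)
        order = np.argsort(-uncertainty)          # most uncertain first
        n_clip = int(len(order) * clip_fraction)  # how many presumed outliers to drop
        kept = order[n_clip:]
        return kept[:batch_size]                  # indices of samples to label next
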
Combating Fake News Using Implementation Intentions
Inaiya Armeen, Ross Niswanger, Chuan (Annie) Tian
Information Systems Frontiers, published 2024-06-26. DOI: 10.1007/s10796-024-10502-0

Abstract: The rise of misinformation on social media platforms is an extremely worrisome issue and calls for the development of interventions and strategies to combat fake news. This research investigates one potential mechanism that can help mitigate fake news: prompting users to form implementation intentions alongside education. Previous research suggests that forming "if-then" plans, otherwise known as implementation intentions, is one of the best ways to facilitate behavior change. To evaluate the effectiveness of such plans, we used MTurk to conduct an experiment in which we educated participants about fake news and then asked them to form implementation intentions to fact-check posts before sharing them on social media. Participants who received both the implementation intention intervention and the educational intervention engaged in significantly more fact-checking behavior than both those who received no intervention and those who received only the educational intervention. This study contributes to the emerging literature on fake news by demonstrating that implementation intentions can be used in interventions to combat fake news.

Skyline-based Exploration of Temporal Property Graphs
Evangelia Tsoukanara, Georgia Koloniari, Evaggelia Pitoura
Information Systems Frontiers, published 2024-06-26. DOI: 10.1007/s10796-024-10505-x

Abstract: In this paper, we focus on temporal property graphs, that is, property graphs whose labeled nodes and edges, as well as the values of the properties associated with them, may change with time. A key challenge in studying temporal graphs lies in detecting interesting events in their evolution, defined as time intervals of significant stability, growth, or shrinkage. To address this challenge, we build aggregated graphs, where nodes are grouped based on the values of their properties, and seek events at the aggregated level. To locate such events, we propose a novel approach based on unified evolution skylines. A unified evolution skyline assesses the significance of an event in conjunction with the duration of the interval in which the event occurs. Significance is measured by a set of counts, where each count refers to the number of graph elements that remain stable, are created, or are deleted, for a specific property value. Lastly, we share experimental findings that highlight the efficiency and effectiveness of our approach.

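As a rough illustration of the skyline idea, not the paper's algorithm: an event can be summarized by the duration of its interval together with its stability, growth, and shrinkage counts, and the skyline keeps the events that are not dominated on all of these dimensions. The Python sketch below assumes, for simplicity, that every dimension is higher-is-better; the class and function names are hypothetical.

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class Event:
        duration: int  # length of the time interval in which the event occurs
        stable: int    # graph elements that remain stable for a property value
        created: int   # graph elements created
        deleted: int   # graph elements deleted

        def key(self):
            return (self.duration, self.stable, self.created, self.deleted)

    def dominates(a: Event, b: Event) -> bool:
        # a dominates b if it is at least as good in every dimension
        # and strictly better in at least one.
        ka, kb = a.key(), b.key()
        return all(x >= y for x, y in zip(ka, kb)) and any(x > y for x, y in zip(ka, kb))

    def skyline(events: list) -> list:
        # Naive O(n^2) skyline: keep the events not dominated by any other event.
        return [e for e in events
                if not any(dominates(other, e) for other in events if other is not e)]
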
Exploiting Shared Sub-Expression and Materialized View Reuse for Multi-Query Optimization
Bala Gurumurthy, Vasudev Raghavendra Bidarkar, David Broneske, Thilo Pionteck, Gunter Saake
Information Systems Frontiers, published 2024-06-25. DOI: 10.1007/s10796-024-10506-w

Abstract: Executing queries in isolation forgoes the potential to reuse intermediate results, which wastes computational resources. Multi-Query Optimization (MQO) addresses this challenge by devising a shared execution strategy across queries, using one of two common strategies: batching or caching. Both strategies have been shown to improve performance, but hardly any study explores their combination. In this work, we explore such a hybrid MQO, combining batching (Shared Sub-Expression) and caching (Materialized View Reuse) techniques. Our hybrid MQO system merges batched query results and also caches intermediate results, so that any new query is given a path within the previous plan and can reuse those results. Since caching is a key component for improving performance, we measure the impact of common caching policies such as FIFO, LRU, MRU, and LFU. Our results show LRU to be optimal for our use case, and we use it in the subsequent evaluations. To study the influence of batching, we vary the derivability factor, which represents the similarity of the results within a query batch; similarly, we vary the cache size to study the influence of caching. Moreover, we also study the role of different database operators in the performance of our hybrid system. The results suggest that, depending on the individual operators, our hybrid method ranges from a 4x speed-up to a 2x slowdown compared with using MQO techniques in isolation. Furthermore, workloads containing similar queries and a generously sized cache benefit from our hybrid method, with an observed speed-up of 2x over sequential execution in the best case.

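Since LRU emerges as the best caching policy in the abstract, here is a minimal, generic Python sketch of an LRU cache for materialized intermediate results, keyed by a normalized sub-expression. The class name, keying scheme, and API are illustrative assumptions, not the authors' system.

    from collections import OrderedDict

    class MaterializedViewCache:
        # A small LRU cache for intermediate query results, keyed by a
        # normalized sub-expression string.

        def __init__(self, capacity: int):
            self.capacity = capacity
            self._cache = OrderedDict()  # sub-expression -> materialized result

        def get(self, subexpr: str):
            if subexpr not in self._cache:
                return None                      # miss: the caller must execute the sub-plan
            self._cache.move_to_end(subexpr)     # mark as most recently used
            return self._cache[subexpr]

        def put(self, subexpr: str, result) -> None:
            self._cache[subexpr] = result
            self._cache.move_to_end(subexpr)
            if len(self._cache) > self.capacity:
                self._cache.popitem(last=False)  # evict the least recently used entry

In such a setup, an optimizer would normalize each candidate sub-expression, call get() before executing it, and put() the result afterwards, so later queries in the batch can take a path through the cached plan fragments.
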
An Economic Framework for Creating AI-Augmented Solutions Across Countries Over Time
Jin Sik Kim, Jinsoo Yeo, Hemant Jain
Information Systems Frontiers, published 2024-06-24. DOI: 10.1007/s10796-024-10487-w

Abstract: This paper examines the potential for collaboration between countries with differential resource endowments to advance AI innovation and achieve mutual economic benefits. Our framework juxtaposes economies with a comparative advantage in AI-capital and those with a comparative advantage in tech-labor, analyzing how these endowments can lead to enhanced comparative advantages over time. Through the application of various production functions and the use of Edgeworth boxes, our analysis reveals that strategic collaboration based on comparative advantage can yield Pareto improvements for both developed and developing countries. Nonetheless, this study also discusses the challenges of uneven benefit distribution, particularly the risk of "brain drain" from developing nations. Contributing to the discourse on the economics of AI and international collaboration, this study highlights the importance of thoughtful strategic planning to promote equitable and sustainable AI development worldwide.

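The abstract does not specify which production functions the framework uses; as a purely illustrative example of the kind of two-factor form such an analysis might employ, a Cobb-Douglas function over AI-capital and tech-labor could be written as

    Y_i = A_i \, K_i^{\alpha_i} \, L_i^{1-\alpha_i}, \qquad 0 < \alpha_i < 1,

where, for country i, K_i denotes its AI-capital endowment, L_i its tech-labor endowment, A_i total factor productivity, and \alpha_i the output elasticity of AI-capital; relative differences in K_i/L_i and \alpha_i across countries would then drive the comparative advantages discussed in the paper. This functional form is an assumption for illustration only.
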