{"title":"Understanding Trust in Automation: A Consideration of Human Factors Context","authors":"Hannah Biermann, S. Otten, M. Ziefle","doi":"10.54941/ahfe1003173","DOIUrl":"https://doi.org/10.54941/ahfe1003173","url":null,"abstract":"People interact with modern technology in various areas of life, both private and professional. One major advantage is the relief provided to users by technical devices that progressively take over tasks and perform them autonomously. Increasing system automation is often accompanied by uncertainties among users, expressed in concerns about paternalism through technology and loss of control. Trust can help to overcome perceived uncertainties and is thus a key driver for the acceptance and successful implementation of innovative technology. The study aim is to understand how trust formation “works”, what technology trust means in different contexts, which perceived uncertainties exist, and how individual usage decisions are made.We designed a two-step empirical research approach. Our aim was to explore what trust and distrust mean to different individuals and groups – in general (e.g., interpersonal trust) and in the context of technology (e.g., trust in automation) – and what influence different dimensions of trust and distrust may have on the acceptance of technology in different application fields. In a preliminary qualitative study, guided interviews (language: German) were conducted to identify conceptual facets of trust and distrust. In sum, 21 adults of different age groups (21 to 86 years) participated. The interviews were transcribed verbatim and analyzed using content analysis. Results were operationalized and measured in an online questionnaire survey (data collection is ongoing). To investigate influences of human factors on perceptions of trust and distrust, demographic data, personality traits, and attitudes were measured (e.g., risk perception, trust disposition, openness to innovation). 
To evaluate how differences in context might influence trust perceptions, the fields of autonomous driving and ambient assisted living were considered. Interviews: Trust was described as a positive gut feeling, difficult to put into words, and associated with affection, love, and friendship. Distrust was described as a negative, acrid feeling and associated with doubt, uncertainty, and deception. Experience (good/bad) and knowledge were identified as key factors that influence (dis)trust in general. Regarding technology, technical reliability and functionality as well as the perception of service operators influence perceived trustworthiness. Trust and distrust seem to influence behavior in terms of reliability and control and depend on general risk perception and attitude (open/skeptical) toward others and things, respectively. Questionnaire: Factor and regression analyses will be conducted to form distinct dimensions of trust and distrust and to model their influence on technology acceptance in different contexts (mobility vs. health care). Cluster analysis will be conducted to examine the extent to which trust and distrust are perceived the same or differently among diverse study participants. Our results contribute to a deeper understanding o","PeriodicalId":409565,"journal":{"name":"Usability and User Experience","volume":"53 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123429524","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"User eXperience Heuristics for Geoportals","authors":"Daniela Quiñones, Andrés Barraza, L. Rojas","doi":"10.54941/ahfe1001689","DOIUrl":"https://doi.org/10.54941/ahfe1001689","url":null,"abstract":"A web portal is defined as a “website that acts as a connection to a collection of information or resources including sources, services, news, tutorials, tools and an organized collection of other references from different websites” (Maguire and Longley, 2005). Specifically, the expression “geospatial resources” is used for geospatial information and services, where this “geoportal” is a type of portal, where it deals with resources with these characteristics. Based on the review that we performed, we can define a geoportal as a website that is considered an entry point to geographic content on the web or, more simply, a website where geographic content can be discovered. Several features can define this type of portals. According to the “Spatial Data Infrastructure of Uruguay” (IdeUy, 2018), geoportals have the following characteristics: User interface; Metadata; Data access; Documentation; Contact; and Help.According to the ISO 9241-11 standard (2018), the User eXperience (UX) is defined as “the perceptions and responses of the person resulting from the use and/or anticipated use of a product, system or service”. The UX evaluation is an important task to perform when developing any kind of websites. It is necessary to assess whether the websites meet the needs of the users and whether it fits properly in the physical, social, and organizational context in which it will be used. In this regard, it is important to evaluate whether a geoportal is intuitive, easy to use, and allows users to complete their objectives. 
UX should be explicitly considered, since the information, the content, the presentation, and the structure of the website should generate a user-friendly experience, motivating people to use the several functionalities and tools provided by the geoportal. Heuristic evaluation is an inspection method in which expert evaluators detect potential usability/UX problems based on heuristics (Nielsen and Molich, 1990). Nielsen’s heuristics allow evaluators to inspect a website in a general way, without focusing on specific domain features. However, this generality makes it difficult to detect domain-specific usability/UX problems. Geoportals have their own features that differentiate them from other products, so it is necessary to use a set of specific and appropriate heuristics for these types of websites. Moreover, we think that heuristics can help detect problems related to UX, not limited to usability aspects (Quiñones et al., 2018; Quiñones and Rusu, 2019). This article presents a set of 10 heuristics to evaluate the UX of geoportals. The heuristics were developed using the methodology proposed by Quiñones et al. (2018). The methodology proposes eight stages to develop and validate a new set of heuristics and can be applied iteratively, repeating the stages that are necessary to refine the proposed set. 
We performed a preliminary validation with UX experts in order to refine and improve the proposed set of UX heuristics for geoportals.","PeriodicalId":409565,"journal":{"name":"Usability and User Experience","volume":"779 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"117028149","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"The information architecture of PDAs-its impact on user comprehension and risk perception","authors":"Li-Jen Wang, Meng-Cong Zheng","doi":"10.54941/ahfe1003195","DOIUrl":"https://doi.org/10.54941/ahfe1003195","url":null,"abstract":"Communication technology has been widely used in healthcare. There is a trend for one of the decision-making aids in innovative healthcare to become interactive web pages and applications. However, users often do not understand professional medical information and health values, especially those with poor numeracy skills. Therefore, more research is needed on translating complex medical information into simple guidelines that users can effectively understand and operate.Aims: This study aims to understand the impact of author-driven and reader-driven information architectures on users of patient decision aids.Methods: Author-driven and reader-driven framework models were developed for the same topic of patient decision aids. Users of different numerical abilities were evaluated on their comprehension, risk perception, usability, and task load. A total of 38 participants were assessed, of which 17 were reader-driven models, and 21 were author-driven models.Results: The results showed that reader-driven model participants were significantly better at comprehension than the author-driven model(p=0.025), but there was no significant difference in risk perception. Author-driven model usability was higher, but overall usability was lower. In the task load section, participants demanded more mental, physical, time pressure, effort, and stress feelings in the reader-driven model, especially in the time pressure section (p=0.011). In contrast, participants were more satisfied with their self-operational performance in the author-driven model. There was no significant difference between the two models regarding comprehension and risk perception between participants with excellent and lousy numeracy skills. 
However, among participants with good numeracy skills, comprehension performance with the reader-driven model was significantly better than with the author-driven model (p=0.018), with no significant difference in risk perception. Discussion: We found that the information architecture of patient decision aids affects the user's comprehension. In the future, the information navigation framework of patient decision aids should take the user's numeracy into account.","PeriodicalId":409565,"journal":{"name":"Usability and User Experience","volume":"55 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124110173","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"So Much Information, So Little Screen Space: Assessing the usability of hierarchical data visualizations in Tableau","authors":"R. Jordan Hinson, Amelia J. Kinsella, Ruth Propper","doi":"10.54941/ahfe1001722","DOIUrl":"https://doi.org/10.54941/ahfe1001722","url":null,"abstract":"The purpose of this usability study was to determine the most effective of three ways to display hierarchical data using the interactive data visualization software, Tableau. Often, data visualizations contain large amounts of important information that users need to be able to manipulate and interpret. Viewing hierarchical data in an interactive data visualization software like Tableau has the advantage of allowing dynamic selection of the hierarchical level of detail of results displayed. This enables improved understanding and exploration of the material. However, individuals using such software do not necessarily have knowledge of a dataset and/or the data visualization software, resulting in an inability to fully investigate data relationships. It is therefore critical that research be conducted to determine which data presentation styles promote intuitive navigation within the data visualization. A within-subjects usability study was conducted to examine the most effective of three ways to display hierarchical data within a designated area of a Tableau visualization. Three distinct visualizations of hierarchical data were randomly shown to participants. Each visualization was bordered by identical contextual information with the centrally placed hierarchical data varying. One condition showed the data relying on filters. A second condition showed the data relying on users to expand and collapse the level of detail with scrolling. A third condition showed the data as a drill-down chart that only expands the level of detail selected by the user. 
Metrics of user-response time, the accuracy of responses to assessment questions, the subjective rank of usability for each data visualization, and open-ended user feedback were examined. Results are discussed.","PeriodicalId":409565,"journal":{"name":"Usability and User Experience","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132660357","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Product Style Preferences: An Image-based User Study Software Concept","authors":"Pengyu Du, Kin Wai Michael Siu, Yi Shih","doi":"10.54941/ahfe1001715","DOIUrl":"https://doi.org/10.54941/ahfe1001715","url":null,"abstract":"In the market, once producers of a particular product category become mature in their production technology, their products will have few functional differences. Thus, the greatest challenge for designers today lies in developing an appropriate design language that fits the tastes of target users. Designers use many user-study methods (interviews, questionnaires, focus groups) to understand their target users’ tastes. However, these methods mainly rely on language as the core medium of interaction. Because language can be subjective and one-sided, it is difficult to describe abstract concepts such as style preferences. In addition, in such design research, language-based information is transferred from target users to design researchers to designers over several rounds, and the objectivity and accuracy of such information can decrease substantially because of differences in people’s interpretations. This paper reviews product styling-related user study methods and technologies and proposes an image-based user study software concept that minimizes the above problems. This proposed software uses images as its main medium of interaction between target users and designers. It applies artificial intelligence technology to analyze target users’ common style preference patterns based on their image choices and sorting results. The software’s output is each target user’s persona in the form of a perceptual map and mood board. These personas provide objective product style preferences directly from the target users. 
This software can thus provide designers with intuitive and accurate references and inspire them to design products that meet users’ tastes and improve user experience.","PeriodicalId":409565,"journal":{"name":"Usability and User Experience","volume":"176 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132207592","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"An experimental study on the applicability of Fusion display and Overlay display of AR smart city data and information","authors":"Yuechan Zheng, Zhanxun Dong","doi":"10.54941/ahfe1001729","DOIUrl":"https://doi.org/10.54941/ahfe1001729","url":null,"abstract":"Background Smart digital twin cities digitally create virtual models of urban systems,It simulates the behavior of physical entities in a realistic environment with the help of data . And through virtual and real interactive feedback, data fusion analysis, decision iterative optimization and other means to design and build smart city platforms, Thus adding or extending new capabilities to smart city management.In conjunction with the trend towards AR glasses, paperless information design,the visual expression of information models, will be an important trend in the future, In demand for AR smart city related information reading, The right visual expression can assist managers in their exploration of the city's data.The key to AR information presentation is the presentation of the information, i.e. the way information is covered. Based on past research findings,There are two most conventional and operational ways to present,Fusion display and Overlay display. However, which one to use as the dominant form of presentation needs to be explored experimentally.Method In order to investigate which is more suitable for smart city data overlay, Fusion display or Overlay display. This experiment used Rhino 3D tools to build a 3D model of the city, The model contains three smart city operation scenarios: building equipment operation status, community power management system, and smart operation of business district,The content of the message is consistent in each scenario,while it is expressed through both Fusion display and Overlay display. 
Twenty identical questions and answers were set up for the information presentation of each comparison group. The experiment will last five weeks with 32 recruits, divided into groups A and B of sixteen members each. The experiment required subjects to be able to read 12 pt Arial regular characters without wearing glasses. Group A uses Fusion display and Group B uses Overlay display. After the subjects complete the same information questionnaire, the accuracy of information delivery and the fatigue of information reading are compared, and the applicability of the interface is verified using the NASA-TLX scale. The specific experiments are as follows: (1) Information accuracy test session: the subject wears the TOBII eye tracker and completes a 2-question visual expression test to become familiar with the experimental process. (2) Cognitive experiment: the subject first reads the questions carefully; after clarifying the questions, the subject picks up the tablet computer for AR overlay on the images, looks at the centre of the screen, receives the \"start\" command, clicks on the centre of the screen, and starts reading to find information and complete the questions, for a total of 5 groups of 20 questions, with the time for each question recorded. (3) Fatigue experiment: in this session","PeriodicalId":409565,"journal":{"name":"Usability and User Experience","volume":"30 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127300054","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Enhancing driver’s experience through emotion sensitive lighting interaction","authors":"Franziska Braun, Fabian Edel, Antonio Ardilio","doi":"10.54941/ahfe1001731","DOIUrl":"https://doi.org/10.54941/ahfe1001731","url":null,"abstract":"The vehicle has become an intelligent product by innovative technologies. However, most of the new technologies address human safety, such as lane keeping or emergency brake assistance. The smart optimization of the driver’s well-being through sensitivity of the technology has been neglected so far. Thus, this study investigates the influence of emotion-sensitive lighting on the driver. Individual light interactions are based on detected emotions of the driver to enhance the individual driving experience. IntroductionLight and colors have an archaic effect on humans (Goldstein 2015). They trigger the human diencephalon to release the serotonin as well as cortisol. For example, light blue components in the light suppress melatonin release. This driver feels awake or concentrated. (Sabine Harrasko-Kocmann) Different factors such as traffic jams, aggressive driving behavior of other road users and challenging weather conditions have a negative impact on the driver’s feeling (Bitkina et al. 2019). The consequences are negative emotions such as anger, fear and worry, which in turn lead to inattention and insecurity. Also factors like tiredness and physical exhaustion can impact the human well-beeing. As a result, the driver is unfocused and annoyed after the ride or agitated and restless. Furthermore, stress for example, increases the risk of accidents because people become inattentive (Evers 2011). To counteract this and positively influence the driving experience, specific lighting interactions are intended to enhance the driver's feelings. In this work, lighting is used to influence the driver's mood. 
The aim of the study is to investigate whether the targeted use of light helps to increase well-being and which use cases are particularly suitable. Approach: Fig. 1 shows the scientific approach for this work. Firstly, requirements are defined. They are based on a literature review on the psychological effects of light, an investigation of the target groups, and a reflection on highly emotional driving situations. For testing, three use cases are defined in detail. The first deals with increasing the driver's concentration, the second with targeted stress reduction, and the third with positive reinforcement of emotion. They include the emotional level of the driver, his or her journey start and destination, the target emotion, and the light interaction to achieve this emotion. A vehicle prototype is modified to test the individual scenarios. Besides LED strips along the windscreen, the door frames, and interior ambient lights, screens are installed in the doors and the roof [Fig. 2]. In this way, not only individual light colors and intensities can be visualized, but also more complex, dynamic movements. To achieve a uniform effect, the interior lights are synchronized with the visualization on the screens. Fig. 2: CAD rendering of the prototype. One screen is placed in each of the doors and one in the roof. LED strips run along the windscreen and door fr","PeriodicalId":409565,"journal":{"name":"Usability and User Experience","volume":"4 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127337740","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"The Effects of Communication Cues on Group Decision Making in Online Conferences","authors":"Tingshu Chen, Ren-Ke He","doi":"10.54941/ahfe1003228","DOIUrl":"https://doi.org/10.54941/ahfe1003228","url":null,"abstract":"Online conferencing has become one of the most common tools we use in our lives and at work today. As a result of the coronavirus disease (COVID-19) pandemic, social mobility was limited, and people around the world had to use computer-mediated communication to connect and collaborate remotely instead of face-to-face interaction. The outbreak led to a surge in the use of online meetings, which are still actively used by a large number of people. However, prolonged or excessive use of online meetings can cause some negative feelings, a phenomenon known as \"Zoom Fatigue.\" The lack of communication cues in online meetings compared to face-to-face interactions can lead to undesirable experiences such as easy distraction or low engagement. Previous research has found that the root causes of zoom fatigue include a lack of non-verbal cues such as spatial voice, facial expressions, eye contact, and body language. It is unclear to what extent these cues influence participants' experiences of social interactions in meetings and the relationship between them. We designed a between-group study to compare the performance and experience of different interaction conditions with different communication cues on meeting teamwork. The communication cues we focused on were spatialized voice (SV), eye contact (EC), facial emotions (FE), and body language (BL). By reducing communication cues, we designed five interaction conditions, namely: an online meeting A1 (no SV), normal face-to-face A2, face-to-face with a mask A3 (no FE), face-to-face with a hidden head A4 (no FE and EC), and face-to-face with a hidden whole body A5 (no FE, EC, and BL). 
Using the hidden profile paradigm, we set up a fictitious personnel selection case to test group decision-making and investigated whether communication cues made a difference to cognitive load and social presence in the meeting. In groups of four, we tested 20 participants in the different interaction conditions and observed and recorded their behavioral performance. For all participants, we collected demographic information as well as online meeting usage. To assess the experience of social interactions in meetings, we recorded task completion times and used the Networked Minds Measure of Social Presence Questionnaire (SPQ), the NASA Task Load Index (NASA-TLX), and the System Usability Scale (SUS). Our results showed a decrease in task completion time after integrating eye contact. For social presence, spatial voice and facial expressions led to a significant increase. Eye contact and gestures increased attention but decreased perceived information comprehension. Eye contact had no significant effect on joint presence, while gestures enhanced joint presence. For cognitive load, eye contact and facial expressions lowered it, while the other cues increased it. The integration of gestures substantially enhanced the usability of social interactions in meetings. In summary, our findings suggest that spatial voice sig","PeriodicalId":409565,"journal":{"name":"Usability and User Experience","volume":"73 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129100841","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Usability Evaluation of Self-Ordering Kiosks in Fast Food Restaurants","authors":"Yi Shan Lee, I. Wen Yen, Meng-Cong Zheng","doi":"10.54941/ahfe1003203","DOIUrl":"https://doi.org/10.54941/ahfe1003203","url":null,"abstract":"In recent years, the introduction of technology and digital management into the service industry has gradually changed consumers' lifestyles. With the popularization of equipment systems, the development and use of self-ordering kiosks have gained much attention in the restaurant industry. However, the utilization rate of self-ordering kiosks has been meager since their establishment, and they cannot relieve the queues at the counter. This study investigates whether self-ordering kiosks in fast food restaurants can meet customers' needs to achieve a self-ordering process and improve ordering efficiency.This study was conducted on three representative fast-food chains in Taiwan, and their self-ordering kiosks were used as the target of the investigation. Thirty participants with no experience were invited to conduct an ordering task experiment to evaluate the ease of use of the ordering interface. The experiment was divided into five contextual tasks: meal selection, modifying the order, changing the meal comment, self-checkout and selecting receipt type, and redeeming e-coupon. All procedures were conducted on the self-ordering kiosk in the fast-food restaurant. 
The evaluation methods were as follows: 1) the participants performed the tasks sequentially while thinking aloud, and their time performance was recorded; 2) the participants filled out the SUS and QUIS questionnaires to indicate their evaluation of and feelings about the self-ordering kiosk; 3) semi-structured interviews were conducted to understand the participants' operation during the tasks and their future expectations. The results showed that in the operation of the self-ordering kiosks in all three fast-food restaurants, there were cases of incorrect interface operation by the participants. We found that: 1) 33% of the participants indicated that they were unclear about the operation process; the kiosk prompts were not clear enough, causing uncertainty among the participants. In particular, when an error occurred and it was necessary to go back to the previous step, the prompts in the system interface were ineffective in helping the participants operate smoothly, resulting in a long ordering time. 2) All three self-ordering kiosks use 21.5-inch upright touch screens and a full-page interface design, making it easy for the participants to ignore the information and hardware devices below the interface when operating at close range. For example, 37% of the participants said they could not find the shopping cart at the bottom of the interface and could not locate the scanner at first when redeeming the e-coupon. 3) 73% of the participants said in the questionnaire that the ability to customize meals is essential, but the degree of customization in the current self-ordering kiosks is relatively low. 
Finally, the SUS scores of the three quick-service restaurants' self-service ordering kiosks were 56.25 (F grade), 62 (D grade), and 56 (F grade), respectively, wh","PeriodicalId":409565,"journal":{"name":"Usability and User Experience","volume":"68 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131505098","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A User Experience Investigation on Using Augmented Reality Technology for Explaining Step-by-Step Instructions","authors":"Tingyi S. Lin, Yin Ho Wang","doi":"10.54941/ahfe1001695","DOIUrl":"https://doi.org/10.54941/ahfe1001695","url":null,"abstract":"Since Augmented Reality (AR) technology is able to help young children read storybooks and understand the content more interestingly (Tuli & Mantri, 2019), as well as can help students in learning abstract geometry (Flores-Bascuñana et al., 2019), AR technology has the potential capacity to extend the representation from paper to digital, and to bring ideas from virtual to reality. In order to use AR to import step-by-step information more effectively and to provide a friendly user experience, we use AR as an auxiliary tool for step-by-step instructions to help users receive and understand the content of information in diversified ways. However, the differences between real and virtual environments often affect the user's ability to operate the devices, to read the information, and to understand the content. In order to explore how the import of AR-assisted step-by-step instructions affects users' experience during operation, we conducted a task-based test and evaluated how the viewing angle and the viewing distance have an effect on test-takers' reading of documents. We analyzed the relationship between devices and human behaviors to understand users' experience in operation, this study explores 1) what is the best viewing angle and distance when users are reading documents and operating AR system; and 2) how the relative positional relationship of the AR developing between paper-based step-by-step instructions affect users' operation. 
The results show that: 1) although the test-takers showed no significant difference in viewing angle (p=0.535) or viewing distance (p=0.489) during operation, there were significant differences in the time they spent completing the task (p=0.048); 2) during the experiment, the operating range of a test-taker's viewing angle was about 70–87 degrees, while the operating range of the viewing distance was about 16–23 cm; and 3) the relative position of the AR developing block and the step-by-step instruction text has an impact on the test-taker's operating experience. Laying out the AR developing block on the right-hand side of the text box gives users greater adaptability in operational performance than the other layout arrangements.","PeriodicalId":409565,"journal":{"name":"Usability and User Experience","volume":"90 12 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116248102","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}