Proceedings of the 6th International Conference on Intelligent Human Systems Integration (IHSI 2023) Integrating People and Intelligent Systems, February 22–24, 2023, Venice, Italy: Latest Publications

The relationship between the performance of the star and the shape of basketball sneaker and the prediction of the shape design
Xingjian Liu, Xingsong Wang
{"title":"The relationship between the performance of the star and the shape of\u0000 basketball sneaker and the prediction of the shape design","authors":"Xingjian Liu, Xingsong Wang","doi":"10.54941/ahfe1002886","DOIUrl":"https://doi.org/10.54941/ahfe1002886","url":null,"abstract":"With the development of the economy and society, the concept of\u0000 industrial design has undergone great changes, forming the modern\u0000 \"human-centered\" concept. Therefore, the key research topic is how to\u0000 connect products with users and design customized products for users. This\u0000 research combines perceptual engineering, analytic hierarchyprocess and\u0000 experimental calculations, combined with the famous stars of the American\u0000 Basketball Association, to establish a prediction of the feature recognition\u0000 and shape design of stars and basketball shoes. Firstly, we use\u0000 questionnaires, 40 famous fans select the adjectives that best represent\u0000 basketball shoes. Secondly, evaluate the performance of players from mental\u0000 and physical dimensions for quantitative analysis; then evaluate the\u0000 appearance of 20 pairs of basketball shoes (4 stars and each star selects\u0000 the latest 5 generations of basketball shoes) and modeling analysis, to\u0000 classify sneakers that match the stars; in SolidWorks CAD, the vector shape\u0000 is obtained, and the key modeling parameters are calculated. Finally, a\u0000 rough model is established between player performance and shoe modeling in\u0000 the modeling index. Qualitative relationship, and when designing the next\u0000 generation of star signature shoe styling, the product is quantitatively\u0000 estimated to a certain extent and styling is controlled by star design. This\u0000 study found that star performance is strongly related to basketball sneaker\u0000 shape. 
And it allows designers to grasp the direction of signature shoe\u0000 design in a short time.","PeriodicalId":269162,"journal":{"name":"Proceedings of the 6th International Conference on Intelligent Human Systems Integration (IHSI 2023) Integrating People and Intelligent Systems, February 22–24, 2023, Venice, Italy","volume":"35 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115028341","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
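The abstract above leans on the analytic hierarchy process to weight styling criteria. As a minimal sketch of how AHP priority weights are typically derived (the criteria names and comparison values below are illustrative assumptions, not the study's data):

```python
import numpy as np

# Hypothetical pairwise comparison matrix for three styling criteria
# (e.g. "dynamic" vs "technological" vs "lightweight"), on Saaty's 1-9 scale.
A = np.array([
    [1.0, 3.0, 5.0],
    [1/3, 1.0, 2.0],
    [1/5, 1/2, 1.0],
])

def ahp_weights(A):
    """Priority weights via the geometric-mean (row) method."""
    g = A.prod(axis=1) ** (1.0 / A.shape[1])
    return g / g.sum()

def consistency_ratio(A):
    """Saaty consistency ratio; CR < 0.1 is conventionally acceptable."""
    n = A.shape[0]
    w = ahp_weights(A)
    lam = (A @ w / w).mean()          # approximate principal eigenvalue
    ci = (lam - n) / (n - 1)
    ri = {3: 0.58, 4: 0.90, 5: 1.12}[n]  # random-index table (partial)
    return ci / ri

w = ahp_weights(A)
```

A judgment matrix is only usable if it is sufficiently consistent, which is why the consistency-ratio check accompanies the weight computation.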
A design approach of proactive HMI based on smart interaction
Xiaohua Sun, Jinglu Li, Weiwei Guo
{"title":"A design approach of proactive HMI based on smart interaction","authors":"Xiaohua Sun, Jinglu Li, Weiwei Guo","doi":"10.54941/ahfe1002823","DOIUrl":"https://doi.org/10.54941/ahfe1002823","url":null,"abstract":"As AI advances, intelligent systems are gaining the ability to\u0000 collaborate with humans to accomplish everyday tasks proactively. In\u0000 proactive HMI design, the accuracy of the user intention prediction model in\u0000 the mechanism becomes the key to affecting the quality of the proactive HMI\u0000 experience. However, there are three issues that caused the lack of\u0000 effective ways to improve the prediction accuracy of user prediction models.\u0000 In this paper, we analyze the Information for improving user prediction\u0000 accuracy, the Intervention stage, and the required contents for smart\u0000 interaction. Then, we develop an approach of the proactive HMI based on\u0000 smart interaction, which is the method that robots learn from the users\u0000 through interactions. We propose the elements, the framework, and the\u0000 guidelines. This paper also provides how to use this approach in design\u0000 case. 
With this approach, the accuracy of user intention prediction of\u0000 proactive HMI can be improved and then can be achieved the goal of improving\u0000 the design effect and the user experience of proactive HMI can be achieved.","PeriodicalId":269162,"journal":{"name":"Proceedings of the 6th International Conference on Intelligent Human Systems Integration (IHSI 2023) Integrating People and Intelligent Systems, February 22–24, 2023, Venice, Italy","volume":"151 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116792190","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
The feasibility of fatigue state monitoring based on eye parameters in virtual reality environment
Jichen Han, Xiaozhou Zhou, Chenglong Zong, Fei Teng
{"title":"The feasibility of fatigue state monitoring based on eye parameters in\u0000 virtual reality environment","authors":"Jichen Han, Xiaozhou Zhou, Chenglong Zong, Fei Teng","doi":"10.54941/ahfe1002843","DOIUrl":"https://doi.org/10.54941/ahfe1002843","url":null,"abstract":"In human‒machine environments, real-time fatigue monitoring of operators\u0000 is particularly important. Excessive fatigue will threaten the operation\u0000 efficiency and even the safety of operators. At present, experimental\u0000 research methods include simulation experiments, half-physical simulations\u0000 and real environment tests. The simulation environment experiment has the\u0000 advantages of low cost, high security, high repeatability and\u0000 anti-interference from external factors. The virtual reality (VR)\u0000 environment is a virtual scene based on a 3D virtual model, which is an\u0000 important development trend of simulation experiment. However, the\u0000 reliability of various measurement methods in VR simulation environment\u0000 needs to be verified. In this study, the Varjo XR3 VR device with the best\u0000 display effect currently on the market is used together with headphones to\u0000 construct a closed simulation experiment scene and at the same time collect\u0000 subjective evaluation of monotonous fatigue and eye parameters. Then, the\u0000 mapping relationship between the two is explored through confirmatory\u0000 experiments. A total of 10 subjects were recruited in this experiment to\u0000 induce fatigue by cruising tasks with low load in flight missions. Subjects\u0000 needed to independently complete the route cruise task for approximately 50\u0000 minutes and report their own fatigue subjective evaluation parameters using\u0000 the Borg scale at the flight turning points every 3 minutes. 
The data of\u0000 percentage of eyelid closure over the pupil over time (PERCLOS) and blink\u0000 rate were obtained by using head display internal camera image and\u0000 projection vectors algorithm, and the validity of eye data was evaluated by\u0000 correlation validation with monotonic fatigue subjective evaluation\u0000 parameters. The results show that the quality of eye parameters collected by\u0000 VR meets the monitoring requirements, and parameters such as PERCLOS and\u0000 blink rate have a high correlation with subjective monotonic fatigue\u0000 evaluation. Eye monitoring has high availability to evaluate operator\u0000 monotonic fatigue in virtual simulation environment.","PeriodicalId":269162,"journal":{"name":"Proceedings of the 6th International Conference on Intelligent Human Systems Integration (IHSI 2023) Integrating People and Intelligent Systems, February 22–24, 2023, Venice, Italy","volume":"10 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116895622","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
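The validation described above rests on computing PERCLOS per time segment and correlating it with the subjective ratings. A minimal sketch of those two steps (the closure threshold, segment data, and Borg values are illustrative assumptions, not the study's measurements or its projection-vector algorithm):

```python
def perclos(closure, threshold=0.8):
    """Fraction of frames whose eyelid-closure fraction exceeds `threshold`
    (the common P80 definition of PERCLOS)."""
    closed = [c >= threshold for c in closure]
    return sum(closed) / len(closed)

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

# Hypothetical data: per-frame closure fractions for three 3-minute
# segments, plus the Borg rating reported at each flight turning point.
segments = [[0.1, 0.2, 0.9], [0.3, 0.85, 0.9], [0.9, 0.95, 0.85]]
perclos_per_segment = [perclos(s) for s in segments]
borg = [9, 12, 15]
r = pearson_r(perclos_per_segment, borg)
```

A high `r` over real data is what the abstract reports as the "high correlation" between PERCLOS and subjective monotonous fatigue.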
Towards Knowledge-based Generation of Synthetic Data by Taxonomizing Expert Knowledge in Production
O. Petrovic, David Leander Dias Duarte, S. Storms, W. Herfs
{"title":"Towards Knowledge-based Generation of Synthetic Data by Taxonomizing\u0000 Expert Knowledge in Production","authors":"O. Petrovic, David Leander Dias Duarte, S. Storms, W. Herfs","doi":"10.54941/ahfe1002915","DOIUrl":"https://doi.org/10.54941/ahfe1002915","url":null,"abstract":"Synthetic data is a promising approach for industrial computer vision\u0000 because it can enable highly autonomous production processes. However, this\u0000 potential is not fulfilled by current software for synthetic data\u0000 generation, which usually requires a programmer to create new datasets. To\u0000 overcome this, we are proposing a framework for more autonomous synthetic\u0000 data generation, formalizing user roles relevant to such systems. A central\u0000 aspect of our framework is that domain experts can easily influence the\u0000 generation of synthetic data by entering knowledge via user interfaces. To\u0000 get a better idea of what such knowledge could be, we have systematically\u0000 collected examples of knowledge types for synthetic data generation in\u0000 production and combined them into a taxonomy with almost 300 nodes. Using\u0000 this taxonomy as the basis for analyses, we derive six implications for our\u0000 framework, such as knowledge being not only passed on by domain experts but\u0000 also by the designer of the user interfaces and generation algorithms. 
We\u0000 plan to incorporate these findings to further refine and implement our\u0000 framework in future research.","PeriodicalId":269162,"journal":{"name":"Proceedings of the 6th International Conference on Intelligent Human Systems Integration (IHSI 2023) Integrating People and Intelligent Systems, February 22–24, 2023, Venice, Italy","volume":"10 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"117148683","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 1
Safe and Flexible Collaborative Assembly Processes Using Behavior Trees and Computer Vision
Minh Trinh, David Kötter, Ariane Chu, Mohamed H. Behery, G. Lakemeyer, O. Petrovic, C. Brecher
{"title":"Safe and Flexible Collaborative Assembly Processes Using Behavior Trees\u0000 and Computer Vision","authors":"Minh Trinh, David Kötter, Ariane Chu, Mohamed H. Behery, G. Lakemeyer, O. Petrovic, C. Brecher","doi":"10.54941/ahfe1002912","DOIUrl":"https://doi.org/10.54941/ahfe1002912","url":null,"abstract":"Human-robot-collaboration combines the strengths of humans, such as\u0000 flexibility and dexterity, as well as the precision and efficiency of the\u0000 cobot. However, small and medium-sized businesses (SMBs) often lack the\u0000 expertise to plan and execute e.g. collaborative assembly processes, which\u0000 still highly depend on manual work. This paper introduces a framework using\u0000 behavior trees (BTs) and computer vision (CV) to simplify this process while\u0000 complying with safety standards. In this way, SMBs are able to benefit from\u0000 automation and become more resilient to global competition. BTs organize the\u0000 behavior of a system in a tree structure [1], [2]. They are modular since\u0000 nodes can be easily added or removed. Condition nodes check if a certain\u0000 condition holds before an action node is executed, which leads to the\u0000 reactivity of the trees. Finally, BTs are intuitive and human-understandable\u0000 and can therefore be used by non-experts [3]. In preliminary works, BTs have\u0000 been implemented for planning and execution of a collaborative assembly\u0000 process [4]. Furthermore, an extension for an efficient task sharing and\u0000 communication between human and cobots was developed in [5] using the Human\u0000 Action Nodes (H-nodes). The H-node is crucial for BTs to handle\u0000 collaborative tasks and reducing idle times. This node requires the use of\u0000 CV for the cobot to recognize, whether the human has finished her sub-task\u0000 and continue with the next one. 
In order to do so, the algorithm must be\u0000 able to detect different assembly states and map them to the corresponding\u0000 tree nodes. A further use of CV is the detection of assembly parts such as\u0000 screws. This enables the cobot to autonomously recognize and handle specific\u0000 components. Collaboration is the highest level of interaction between humans\u0000 and cobots [4] due to a shared workspace and task. Therefore, it requires\u0000 strict safety standards that are determined in the DIN EN ISO 10218 and DIN\u0000 ISO/TS 15066 [6], [7], which e.g. regulate speed limits for cobots. The\u0000 internal safety functions of cobots have been successfully extended with\u0000 sensors, cameras, and CV algorithms [8]–[10] to avoid collisions with the\u0000 human. The latter approach uses the object detection library OpenCV [11],\u0000 for instance. OpenCV offers a hand detection algorithm, which is pretrained\u0000 with more than 30.000 images of hands. In addition, it allows for a high\u0000 frame rate, which is essential for real-time safety.In this paper, CV is\u0000 used to enhance the CoboTrees (cobots and BTs) demonstrator within the\u0000 Cluster of Excellence ’Internet of Production’ [12]. The demonstrator\u0000 consists of a six degree-of-freedom Doosan M1013 cobot, which is controlled\u0000 by the Robot Operating System (ROS) and two Intel RealSense D435 depth\u0000 cameras. The BTs are modeled using the PyTrees library [13]. Using OpenCV,\u0000 an object and assembly state detection algorithm is implemented e.g. for use\u0000 in the H-nodes. 
Since the majority of accidents between robots and humans\u0000 occ","PeriodicalId":269162,"journal":{"name":"Proceedings of the 6th International Conference on Intelligent Human Systems Integration (IHSI 2023) Integrating People and Intelligent Systems, February 22–24, 2023, Venice, Italy","volume":"46 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121968535","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
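The abstract describes condition nodes gating action nodes, with the H-node waiting for a vision check to confirm the human's sub-task is done. A minimal plain-Python sketch of these sequence/condition/action semantics (not the paper's PyTrees implementation; the world state and the screw-fastening step are invented for illustration, and the vision check is stubbed as a flag):

```python
from enum import Enum

class Status(Enum):
    SUCCESS = 1
    FAILURE = 2
    RUNNING = 3

class Condition:
    """Condition node: evaluates a predicate each tick."""
    def __init__(self, predicate):
        self.predicate = predicate
    def tick(self):
        return Status.SUCCESS if self.predicate() else Status.FAILURE

class Action:
    """Action node: runs a side effect and reports success."""
    def __init__(self, effect):
        self.effect = effect
    def tick(self):
        self.effect()
        return Status.SUCCESS

class Sequence:
    """Ticks children in order; stops at the first child that does not
    succeed (classic BT sequence semantics), which makes the tree reactive."""
    def __init__(self, children):
        self.children = children
    def tick(self):
        for child in self.children:
            status = child.tick()
            if status != Status.SUCCESS:
                return status
        return Status.SUCCESS

# Hypothetical collaborative step: the cobot fastens a screw only once a
# (stubbed) vision check reports the human finished their sub-task.
world = {"human_done": False, "screw_fastened": False}
tree = Sequence([
    Condition(lambda: world["human_done"]),   # stands in for the H-node's CV check
    Action(lambda: world.update(screw_fastened=True)),
])

first = tree.tick()           # human not done: the action never runs
world["human_done"] = True    # the vision check would flip this
second = tree.tick()          # condition now holds, action executes
```

Because the condition is re-evaluated on every tick, adding or removing such nodes changes the gating behavior without restructuring the rest of the tree, which is the modularity the abstract attributes to BTs.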
Visual cues improve spatial orientation in telepresence as in VR
Jennifer Brade, Tobias Hoppe, Sven Winkler, Philipp Klimant, Georg Jahn
{"title":"Visual cues improve spatial orientation in telepresence as in VR","authors":"Jennifer Brade, Tobias Hoppe, Sven Winkler, Philipp Klimant, Georg Jahn","doi":"10.54941/ahfe1002862","DOIUrl":"https://doi.org/10.54941/ahfe1002862","url":null,"abstract":"When moving in reality, successful spatial orientation is enabled\u0000 through continuous updating of egocentric spatial relations to the\u0000 surrounding environment. But in Virtual Reality (VR) or telepresence, cues\u0000 of one’s own movement are rarely provided, which typically impairs spatial\u0000 orientation. Telepresence robots are mostly operated by minimal real\u0000 movements of the user via PC-based controls, which entail a lack of real\u0000 translations and rotations and thus can disrupt spatial orientation. Studies\u0000 in virtual environments show that a certain degree of spatial updating is\u0000 possible without body-based cues to self-motion (vestibular, proprioceptive,\u0000 motor efference) solely through continuous visual information about the\u0000 change in orientation or additional visual landmarks. While a large number\u0000 of studies investigated spatial orientation in virtual environments, spatial\u0000 updating in telepresence remains largely unexplored. VR and telepresence\u0000 environments share the common feature that the user is not physically\u0000 located in the mediated environment and thus interacts in an environment\u0000 that does not correspond to the body-based cues generated by posture and\u0000 self-motion in the real environment. Despite this similarity, virtual and\u0000 telepresence environments also have significant differences in how the\u0000 environment is presented: common, commercially available telepresence\u0000 systems can usually only display the environment on a 2D monitor. 
The 2D\u0000 monitor impairs the operator's depth perception compared with 3D\u0000 presentation in VR, for instance in an HMD, and interacting by means of\u0000 mouse movements on a 2D plane is indirect compared with moving VR\u0000 controllers and the HMD in 3D space. Thus, it cannot be assumed without\u0000 verification that the spatial orientation in 2D telepresence systems can be\u0000 compared with that in VR systems. Therefore, we employed a standard spatial\u0000 orientation task with a telepresence robot to evaluate if results concerning\u0000 the number of visual cues turn out similar to findings in VR-studies.To\u0000 address the research question, a triangle completion task (TCT) was carried\u0000 out using the telepresence robot Double 3. The participants (n= 30)\u0000 controlled the telepresence robot remotely using a computer and a mouse: At\u0000 first, they moved the robot to a specified point, then they turned the robot\u0000 to orient towards a second specified point, moved there and were then asked\u0000 to return the robot to its starting point. To evaluate the influence of the\u0000 number of visual cues on the performance in the TCT, three conditions that\u0000 varied in the amount of visual information provided for navigating the third\u0000 leg were presented in a within-subjects design. Similar to studies that\u0000 showed support of spatial orientation in TCT by visual cues in VR, the\u0000 number of visual cues available while navigating the third leg supported\u0000 triangle completion with a telepresence robot. 
This was confirmed by the\u0000 trend of reduced error with more visual ","PeriodicalId":269162,"journal":{"name":"Proceedings of the 6th International Conference on Intelligent Human Systems Integration (IHSI 2023) Integrating People and Intelligent Systems, February 22–24, 2023, Venice, Italy","volume":"28 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126147468","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
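Performance in a triangle completion task is commonly scored as the distance between where the participant's homing response ends and the true starting point. A small geometric sketch of that error measure (the leg lengths, turn convention, and function name are illustrative assumptions, not the study's scoring code):

```python
import math

def homing_error(leg1, turn_deg, leg2, response_turn_deg, response_dist):
    """Endpoint error in a triangle completion task.

    The agent starts at the origin heading +y, walks `leg1`, turns
    `turn_deg` to the right, walks `leg2`, then turns `response_turn_deg`
    to the right and walks `response_dist` (the homing response).
    Returns the distance from the response endpoint to the origin."""
    # Position and heading after the two outbound legs.
    heading = math.pi / 2 - math.radians(turn_deg)
    x = leg2 * math.cos(heading)
    y = leg1 + leg2 * math.sin(heading)
    # Apply the participant's homing turn and distance.
    heading -= math.radians(response_turn_deg)
    ex = x + response_dist * math.cos(heading)
    ey = y + response_dist * math.sin(heading)
    return math.hypot(ex, ey)

# Right isosceles outbound path: 2 m, 90-degree right turn, 2 m.
# The ideal response is a 135-degree right turn and a 2*sqrt(2) m leg.
perfect = homing_error(2, 90, 2, 135, math.sqrt(8))
no_move = homing_error(2, 90, 2, 135, 0)   # stopping short leaves the full distance as error
```

Averaging this error per condition is one straightforward way to express the "reduced error with more visual cues" trend the abstract reports.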
Multiphase pointing motion model based on hand-eye bimodal cooperative behavior
Chenglong Zong, Xiaozhou Zhou, Jichen Han, Haiyan Wang
{"title":"Multiphase pointing motion model based on hand-eye bimodal cooperative\u0000 behavior","authors":"Chenglong Zong, Xiaozhou Zhou, Jichen Han, Haiyan Wang","doi":"10.54941/ahfe1002844","DOIUrl":"https://doi.org/10.54941/ahfe1002844","url":null,"abstract":"Pointing, as the most common interaction behavior in 3D interactions,\u0000 has become the basis and hotspot of natural human-computer interaction\u0000 research. In this paper, hand and eye movement data of multiple participants\u0000 in a typical pointing task were collected with a virtual reality experiment,\u0000 and we further clarified the movements of the hand and eye in spatial and\u0000 temporal properties and their cooperation during the whole task process. Our\u0000 results showed that the movements of both the hand and eye in a pointing\u0000 task can be divided into three stages according to their speed properties,\u0000 namely, the preparation stage, ballistic stage and correction stage. Based\u0000 on the verification of the phase division of hand and eye movements in the\u0000 pointing task, we further clarified the phase division standards and the\u0000 relationship between the duration of every pair of phases of hand and eye.\u0000 Our research has great significance for further mining human natural\u0000 pointing behavior and realizing more reliable and accurate human-computer\u0000 interaction intention recognition.","PeriodicalId":269162,"journal":{"name":"Proceedings of the 6th International Conference on Intelligent Human Systems Integration (IHSI 2023) Integrating People and Intelligent Systems, February 22–24, 2023, Venice, Italy","volume":"7 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1900-01-01","publicationTypes":"Journal 
Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128204455","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
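The three-stage division above is driven by the movement's speed properties. One simple way to sketch such a segmentation is to threshold a normalized speed profile: everything before speed first rises counts as preparation, the high-speed bulk around the peak as ballistic, and the slow tail as correction. The thresholds and profile below are illustrative assumptions, not the paper's division standards:

```python
def segment_phases(speeds, onset=0.1, corr=0.3):
    """Split a normalized speed profile (peak = 1.0) into three stages:
    preparation (before speed first exceeds `onset`), ballistic (from
    there to the last post-peak sample above `corr`), and correction
    (the remainder). Thresholds are illustrative only."""
    peak = max(range(len(speeds)), key=speeds.__getitem__)
    start = next(i for i, s in enumerate(speeds) if s > onset)
    end = max(i for i, s in enumerate(speeds) if i >= peak and s > corr)
    return speeds[:start], speeds[start:end + 1], speeds[end + 1:]

# Hypothetical bell-shaped speed profile of a single pointing movement.
profile = [0.0, 0.05, 0.2, 0.6, 1.0, 0.7, 0.4, 0.2, 0.1, 0.05]
prep, ballistic, correction = segment_phases(profile)
```

Applying the same segmentation to hand-speed and gaze-speed series yields the per-phase durations whose pairwise relationship the paper analyzes.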
A data-driven but person-centered assessment framework for sustainable rehabilitation services
Masayuki Ihara, H. Tokunaga, Hiroki Murakami, Shinpei Saruwatari, Kazuki Takeshita, Akihiko Koga, Takashi Yukihira, Shinya Hisano, Ryoichi Maeda, M. Motoe
{"title":"A data-driven but person-centered assessment framework for sustainable\u0000 rehabilitation services","authors":"Masayuki Ihara, H. Tokunaga, Hiroki Murakami, Shinpei Saruwatari, Kazuki Takeshita, Akihiko Koga, Takashi Yukihira, Shinya Hisano, Ryoichi Maeda, M. Motoe","doi":"10.54941/ahfe1002860","DOIUrl":"https://doi.org/10.54941/ahfe1002860","url":null,"abstract":"Utilization of data and information technologies is one of the\u0000 expectations for the future in a health care domain. Although electronic\u0000 health records are used in decision making for medical prescription in many\u0000 hospitals, small nursing care providers are not able to effectively utilize\u0000 data. We aim at development of an online rehabilitation service that\u0000 utilizes data both for designing a rehabilitation plan for each patient and\u0000 for a sustainability of the service. This paper presents a framework for the\u0000 assessment of the rehabilitation that is data-driven but\u0000 person-centered.According to the International Classification of\u0000 Functioning, Disability and Health (ICF), in designing a rehabilitation\u0000 plan, it is important to consider not only the maintenance and improvement\u0000 of physical functions of a patient's body, but also his/her activities\u0000 related to tasks and actions in a daily life and the participation or\u0000 involvement in his/her life situation. Each patient has his/her own\u0000 background in needs for the rehabilitation thus we focus on the\u0000 person-centered care approach where a health care should be based on the\u0000 unique person's needs. The rehabilitation plan should focus on the abilities\u0000 of the person and encourage activity even though the data is actively\u0000 used.The proposed assessment framework consists of a part for evaluating the\u0000 effect of rehabilitation and that for extracting problems in operation of\u0000 the online rehabilitation service. 
The part of rehabilitation effect\u0000 evaluation is based on Japanese version of the Cardiovascular Health Study\u0000 frailty index: weight loss, slow gait speed, low physical activity,\u0000 exhaustion, and low grip strength. The part also includes\u0000 questionnaire-based indices of subjective happiness and willingness for\u0000 social activities. On the other hand, the part of the problem extraction for\u0000 a sustainable operation of the service includes an interoperability between\u0000 the nursing facility site and the patient's home in terms of an online\u0000 service connecting them for a video-based rehabilitation exercise. The part\u0000 is based on the questionnaires and interviews for workers at the nursing\u0000 facility as well as those for the patient and family.In this case study, we\u0000 introduce an example of the proposed framework at the step of a service\u0000 design and discuss how to apply it to the service operation step. In a\u0000 rehabilitation service domain, neither the data distribution platform nor\u0000 the data bank is currently in operation. However, a rehabilitation\u0000 assessment system utilized the platform and data bank would be in service in\u0000 the future. For a sustainability of the service, it is important to\u0000 successfully integrate data, technologies, and human as a stakeholder. 
In\u0000 this paper, we also discuss a person-centered design for the integration\u0000 with a focus on considering life backgrounds and sense of values of the\u0000 patient as well as his/her home environment and risk management for the\u0000 rehabilitation exercise.","PeriodicalId":269162,"journal":{"name":"Proceedings of the 6th International Conference on Intelligent Human Systems Integration (IHSI 2023) Integrating People and Intelligent Systems, February 22–24, 2023, Venice, Italy","volume":"41 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132034789","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
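The effect-evaluation part above scores the five Cardiovascular Health Study frailty criteria. A hedged sketch of the usual scoring logic, counting met criteria and mapping the count to the conventional robust/prefrail/frail categories (the per-criterion cut-offs that decide each boolean are sex- and population-specific and are deliberately omitted here; the patient record is invented):

```python
def frailty_category(criteria):
    """Map the number of met CHS criteria to the conventional categories:
    0 met -> robust, 1-2 -> prefrail, 3 or more -> frail."""
    met = sum(bool(c) for c in criteria.values())
    if met >= 3:
        return "frail"
    if met >= 1:
        return "prefrail"
    return "robust"

# Hypothetical assessment: each flag records whether the criterion was met
# after applying the (omitted) measurement cut-offs.
patient = {
    "weight_loss": False,
    "slow_gait_speed": True,
    "low_physical_activity": True,
    "exhaustion": False,
    "low_grip_strength": True,
}
category = frailty_category(patient)   # three criteria met
```

Tracking this category over time, alongside the questionnaire-based happiness and social-willingness indices, is what the framework's effect-evaluation part would report per patient.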
The influence of interface beauty factors on the cognitive performance of news web interface visual information
Dingwen Zhang
{"title":"The influence of interface beauty factors on the cognitive performance of\u0000 news web interface visual information","authors":"Dingwen Zhang","doi":"10.54941/ahfe1002909","DOIUrl":"https://doi.org/10.54941/ahfe1002909","url":null,"abstract":"At present, there are many researches on the cognitive performance of\u0000 digital interface information at home and abroad, but there are relatively\u0000 few researches on the influence of interface beauty factors on the\u0000 performance in the news web interface environment. This study selects three\u0000 factors of interface beauty factors, namely balance factor, symmetry factor\u0000 and density factor, as the variables of this study, to explore the influence\u0000 of interface beauty factors on the cognitive performance of news web\u0000 interface visual information. The research is divided into two parts:\u0000 interface beauty factor calculation and interface visual information\u0000 cognitive performance experiment. The results show that there is no obvious\u0000 linear relationship between the three factors and visual information\u0000 cognitive performance, but the model formulas of interface balance,\u0000 interface symmetry, interface density and average reaction time can be\u0000 obtained by quadratic curve fitting. 
The results are of great significance\u0000 to guide designers to design and evaluate the news web page interface in a\u0000 quantitative way from the perspective of improving information cognitive\u0000 performance.","PeriodicalId":269162,"journal":{"name":"Proceedings of the 6th International Conference on Intelligent Human Systems Integration (IHSI 2023) Integrating People and Intelligent Systems, February 22–24, 2023, Venice, Italy","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130895554","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
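The abstract reports quadratic curve fits relating each beauty factor to average reaction time. A minimal sketch of one such fit with `numpy.polyfit` (the balance values and reaction times below are invented for illustration; the study's measurements are not reproduced):

```python
import numpy as np

# Hypothetical (balance factor, mean reaction time in ms) observations.
balance = np.array([0.55, 0.65, 0.75, 0.85, 0.95])
rt_ms = np.array([980.0, 905.0, 870.0, 895.0, 975.0])

# Degree-2 least-squares fit: rt ~ a*balance^2 + b*balance + c
# (np.polyfit returns coefficients highest degree first).
a, b, c = np.polyfit(balance, rt_ms, deg=2)

# A U-shaped fit (a > 0) implies an interior balance value minimizing
# reaction time, at the parabola's vertex -b / (2a).
best_balance = -b / (2 * a)
```

The vertex of such a fitted parabola is what would let a designer pick a target value for a beauty factor quantitatively, which is the practical use the abstract points to.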
Meta-verse Non-Heritage in Art Exhibitions: Virtual Reality Contextual Narrative Across Time and Space
Yu jian
{"title":"Meta-verse Non-Heritage in Art Exhibitions: Virtual Reality Contextual\u0000 Narrative Across Time and Space","authors":"Yu jian","doi":"10.54941/ahfe1002904","DOIUrl":"https://doi.org/10.54941/ahfe1002904","url":null,"abstract":"The meta-universe is a digital living space constructed by humans using\u0000 digital technologies such as digital twin, virtual reality, Internet of\u0000 Things, and cloud computing to connect reality and imagination. This mirror\u0000 simulation of the real world breaks the boundaries of time and space and\u0000 builds a new digital space-time context. Digital technology through sensing\u0000 and VR makes people experience the retro atmosphere constructed by virtual\u0000 reality across space and time while perceiving the real space-time embodied\u0000 experience, which is the missing means to break the boundaries of space-time\u0000 to experience the contextual narrative, revitalize historical relics,\u0000 interpret historical stories, and achieve living heritage in art\u0000 exhibitions. 
The concept of \"contextual narrative\" in the meta-universe ICH\u0000 in the art exhibition will use digital modalities to recreate ICH scenes,\u0000 and in terms of research methodology, it will draw on the narrative\u0000 structure, characteristics, characters and plot of ICH scenes for\u0000 inter-temporal construction.","PeriodicalId":269162,"journal":{"name":"Proceedings of the 6th International Conference on Intelligent Human Systems Integration (IHSI 2023) Integrating People and Intelligent Systems, February 22–24, 2023, Venice, Italy","volume":"16 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126806568","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0