Bridging Quantitative and Qualitative Digital Experience Testing
Ranjitha Kumar
Proceedings of the 46th International ACM SIGIR Conference on Research and Development in Information Retrieval (SIGIR '23), July 18, 2023. DOI: 10.1145/3539618.3591873
Digital user experiences are a mainstay of modern communication and commerce; multi-billion-dollar industries have arisen around optimizing digital design. Usage analytics and A/B testing solutions allow growth hackers to quantitatively compute conversion over key user journeys, while user experience (UX) testing platforms enable UX researchers to qualitatively analyze usability and brand perception. Although these workflows pursue the same objective, producing better UX, the gulf between quantitative and qualitative testing is wide: they involve different stakeholders and rely on disparate methodologies, budgets, data streams, and software tools. This gap belies the opportunity to create a single platform that optimizes digital experiences holistically: using quantitative methods to uncover what and how much, and qualitative analysis to understand why. Such a platform could monitor conversion funnels, identify anomalous behaviors, intercept live users exhibiting those behaviors, and solicit explicit feedback in situ. This feedback could take many forms: survey responses, screen recordings of participants performing tasks, think-aloud audio, and more. By combining data from multiple users and correlating across feedback types, the platform could surface not just insights that a particular conversion funnel had been affected, but hypotheses about what had caused the change in user behavior. The platform could then rank these insights by how often the observed behavior occurred in the wild, using large-scale analytics to contextualize the results from small-scale UX tests.

To this end, a decade of research has focused on interaction mining: a set of techniques for capturing interaction and design data from digital artifacts, and aggregating these multimodal data streams into structured representations bridging quantitative and qualitative experience testing [1-4].
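To make the funnel-monitoring idea concrete, the sketch below computes step-to-step conversion rates and flags steps whose conversion has regressed against a baseline; such flags are what a unified platform might use to decide which live users to intercept for in-situ feedback. This is a minimal illustration, not the paper's implementation: the function names, the 4-step checkout funnel, and the 10% tolerance are all assumptions.

```python
def conversion_rates(step_counts):
    """Per-step conversion: fraction of users who reached step i+1 from step i."""
    return [
        step_counts[i + 1] / step_counts[i] if step_counts[i] else 0.0
        for i in range(len(step_counts) - 1)
    ]

def anomalous_steps(observed, baseline, tolerance=0.10):
    """Indices of funnel steps whose conversion dropped more than `tolerance`
    below baseline -- candidates for intercepting live users in situ.
    The 0.10 threshold is an illustrative assumption."""
    return [
        i for i, (obs, base) in enumerate(zip(observed, baseline))
        if base - obs > tolerance
    ]

# Hypothetical 4-step checkout funnel: visits -> cart -> payment -> purchase.
baseline = conversion_rates([1000, 600, 480, 432])   # 0.60, 0.80, 0.90
observed = conversion_rates([1000, 580, 290, 255])   # cart -> payment regressed
print(anomalous_steps(observed, baseline))           # prints [1]
```

In a real deployment the baseline would come from large-scale analytics over historical sessions, which is also what would let the platform rank flagged anomalies by how often they occur in the wild.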
During user sessions, interaction mining systems record user interactions (e.g., clicks, scrolls, text input), screen captures, and render-time data structures (e.g., website DOMs, native app view hierarchies). Once captured, these data streams are aligned and combined into user traces, sequences of user interactions semanticized by the design data of their UI targets [5]. The structure of these traces affords new workflows for composing quantitative and qualitative methods, building toward a unified platform for optimizing digital experiences.
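The alignment step described above can be sketched as follows: each raw interaction event is paired with the most recent render-time capture at or before its timestamp, and the design data of its UI target is attached, yielding a semanticized user trace. The data shapes here (`Event`, `Snapshot`, a dict of element ids to design attributes) are illustrative assumptions, not the representation used by the cited systems.

```python
from dataclasses import dataclass

@dataclass
class Event:
    t: float         # session timestamp (seconds)
    kind: str        # "click", "scroll", "text_input", ...
    target_id: str   # id of the UI element the event hit

@dataclass
class Snapshot:
    t: float         # capture timestamp
    elements: dict   # target_id -> design data (role, label, bounds, ...)

def build_trace(events, snapshots):
    """Align each event with the latest snapshot at or before it, attaching
    the target element's design data to semanticize the interaction."""
    snapshots = sorted(snapshots, key=lambda s: s.t)
    trace = []
    for ev in sorted(events, key=lambda e: e.t):
        snap = None
        for s in snapshots:
            if s.t <= ev.t:
                snap = s
            else:
                break
        design = snap.elements.get(ev.target_id, {}) if snap else {}
        trace.append({"t": ev.t, "kind": ev.kind,
                      "target": ev.target_id, "design": design})
    return trace

# A click on a "Buy now" button becomes a semanticized trace entry.
snaps = [Snapshot(0.0, {"buy": {"role": "button", "label": "Buy now"}})]
trace = build_trace([Event(1.2, "click", "buy")], snaps)
print(trace[0]["design"]["label"])   # prints Buy now
```

Because each trace entry carries both behavioral data (what the user did) and design data (what they did it to), the same structure can feed quantitative aggregation across sessions and qualitative review of individual sessions.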