{"title":"Promoting the use of research evidence from websites: optimising microsurveys as feedback loops to drive improvement.","authors":"Nehal Eldeeb, Cheng Ren, Valerie B Shapiro","doi":"10.1332/17442648Y2025D000000057","DOIUrl":null,"url":null,"abstract":"<p><strong>Background: </strong>Educators' use of research evidence (URE) from intermediary websites is not well understood. Current studies measure URE through periodic, retrospective user reports or by passively tracking website usage, neither of which adequately inform the continuous improvement efforts of intermediaries. This study examines the feasibility of microsurveys - brief, behaviour-triggered surveys embedded within websites - as a tool for assessing and informing the improvement of URE. Specifically, this article explores configurations to optimise microsurvey response rates.</p><p><strong>Methods: </strong>A plug-in embedded microsurveys across web pages. Microsurveys included a five-point Likert emoticon rating scale and an open-ended follow-up. Four pilot studies tested variations in: (a) question wording, (b) time delays before triggering, (c) number of clicks, and (d) optimised conditions integrating the best configurations. Chi-square tests and logistic regression analysed differences in response rates and relationships between conditions, scores and response rates.</p><p><strong>Results: </strong>Response rates improved by discarding low-performing (that is, low response rate) questions, selecting better time delays, and reducing the number of clicks to complete the microsurvey. Likert scale response rates increased from 4.18 per cent to 11.31 per cent under optimised conditions. 
Follow-up response rates remained stable, with higher engagement associated with favourable Likert scores.</p><p><strong>Conclusions: </strong>This study establishes the potential of microsurveys for measuring URE from intermediary websites, achieving response rates understood to yield reliable estimates for informing the promotion of evidence in practice. Future research should explore additional configurations to further optimise response rate, integrate microsurveys with observational and behavioural data to assess validity, and study integrating microsurvey feedback into organisational change processes.</p>","PeriodicalId":51652,"journal":{"name":"Evidence & Policy","volume":" ","pages":"1-27"},"PeriodicalIF":2.5000,"publicationDate":"2025-06-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Evidence & Policy","FirstCategoryId":"90","ListUrlMain":"https://doi.org/10.1332/17442648Y2025D000000057","RegionNum":3,"RegionCategory":"社会学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"SOCIAL SCIENCES, INTERDISCIPLINARY","Score":null,"Total":0}
Citations: 0
Abstract
Background: Educators' use of research evidence (URE) from intermediary websites is not well understood. Current studies measure URE through periodic, retrospective user reports or by passively tracking website usage, neither of which adequately informs the continuous improvement efforts of intermediaries. This study examines the feasibility of microsurveys - brief, behaviour-triggered surveys embedded within websites - as a tool for assessing and informing the improvement of URE. Specifically, this article explores configurations to optimise microsurvey response rates.
Methods: A plug-in embedded microsurveys across web pages. Microsurveys included a five-point Likert emoticon rating scale and an open-ended follow-up. Four pilot studies tested variations in: (a) question wording, (b) time delays before triggering, (c) number of clicks, and (d) optimised conditions integrating the best configurations. Chi-square tests and logistic regression analysed differences in response rates and relationships between conditions, scores and response rates.
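The chi-square comparison of response rates described above can be sketched as follows. This is a minimal illustration, not the study's code: the counts are hypothetical, chosen only to roughly match the 4.18 per cent and 11.31 per cent Likert-scale response rates reported in the Results.

```python
# Hypothetical sketch of the abstract's analysis: a Pearson chi-square test
# comparing microsurvey response rates between a baseline and an optimised
# configuration. The impression/response counts below are illustrative only.

def chi_square_2x2(table):
    """Pearson chi-square statistic for a 2x2 contingency table of the form
    [[responded_a, ignored_a], [responded_b, ignored_b]]."""
    row_totals = [sum(row) for row in table]
    col_totals = [sum(col) for col in zip(*table)]
    n = sum(row_totals)
    stat = 0.0
    for i, row in enumerate(table):
        for j, observed in enumerate(row):
            expected = row_totals[i] * col_totals[j] / n
            stat += (observed - expected) ** 2 / expected
    return stat

# Illustrative counts: 42 responses out of 1,000 page views (baseline)
# versus 113 out of 1,000 (optimised) -- roughly the reported rates.
table = [[42, 958], [113, 887]]
print(f"chi-square = {chi_square_2x2(table):.2f}")  # prints "chi-square = 35.25"
```

A statistic this large against one degree of freedom would indicate a significant difference in response rates between conditions; in practice one would use a library routine such as `scipy.stats.chi2_contingency` to obtain the p-value directly.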
Results: Response rates improved by discarding low-performing (that is, low response rate) questions, selecting better time delays, and reducing the number of clicks to complete the microsurvey. Likert scale response rates increased from 4.18 per cent to 11.31 per cent under optimised conditions. Follow-up response rates remained stable, with higher engagement associated with favourable Likert scores.
Conclusions: This study establishes the potential of microsurveys for measuring URE from intermediary websites, achieving response rates understood to yield reliable estimates for informing the promotion of evidence in practice. Future research should explore additional configurations to further optimise response rates, integrate microsurveys with observational and behavioural data to assess validity, and examine how microsurvey feedback can be integrated into organisational change processes.