Analysis of images to identify specific user actions
José Arguedas Castillo, Walter Alvarez Grijalba, Randall Araya Campos, Paula Estrella
2019 IV Jornadas Costarricenses de Investigación en Computación e Informática (JoCICI), August 2019. DOI: 10.1109/JoCICI48395.2019.9105211
Abstract
The study of any writing process, such as translation or programming, typically involves collecting data about user actions, including keystrokes and mouse clicks. A tool designed for this type of data collection is ResearchLogger, which, in addition to keystrokes and clicks, collects screenshots at predefined intervals and images around the mouse pointer. The main roadblock for researchers using this tool is analyzing the images it generates, which are highly diverse and usually must be processed manually. In particular, one action that is very hard to recognize without the images is text selection; this action is important because it may indicate copying, replacing, or formatting of a text passage. In this paper, we present two solutions that support researchers in reviewing such images, and we test them on data from a previous pilot study. Results show that they can be useful for detecting images that contain the specific user action of text selection.
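The abstract does not describe the two solutions in detail. As an illustration only, a minimal sketch of one plausible way to flag candidate screenshots, looking for the highlight color of selected text with OpenCV, is shown below; the function name, HSV color range, and area threshold are assumptions for the sketch, not the authors' method.

```python
# Illustrative sketch only: flag a screenshot as a text-selection candidate
# if a noticeable fraction of its pixels falls in a typical selection
# highlight color range (e.g., a blue highlight). The color range and the
# area threshold are assumed values, not taken from the paper.
import cv2
import numpy as np


def has_selection_highlight(image_path: str,
                            min_area_fraction: float = 0.01) -> bool:
    """Return True if the image likely contains a text-selection highlight."""
    img = cv2.imread(image_path)
    if img is None:
        raise FileNotFoundError(image_path)

    # Work in HSV, where a highlight color is easier to isolate than in BGR.
    hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)

    # Rough HSV range for a blue selection highlight (assumed values).
    lower = np.array([100, 80, 80])
    upper = np.array([130, 255, 255])
    mask = cv2.inRange(hsv, lower, upper)

    # Flag the image if enough of it falls in the highlight color range.
    highlighted_fraction = cv2.countNonZero(mask) / mask.size
    return highlighted_fraction >= min_area_fraction


if __name__ == "__main__":
    # Hypothetical file name; in practice this would be one of the
    # screenshots or pointer-region images produced by ResearchLogger.
    print(has_selection_highlight("screenshot_0001.png"))
```

Such a color-based filter would only pre-select candidate images for manual review; distinguishing whether the selection preceded copying, replacing, or formatting would still require the surrounding keystroke and click data.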