{"title":"Scientific LogAnalyzer: a web-based tool for analyses of server log files in psychological research.","authors":"Ulf-Dietrich Reips, Stefan Stieger","doi":"10.3758/bf03195576","DOIUrl":"https://doi.org/10.3758/bf03195576","url":null,"abstract":"<p><p>Scientific LogAnalyzer is a platform-independent interactive Web service for the analysis of log files. Scientific LogAnalyzer offers several features not available in other log file analysis tools--for example, organizational criteria and computational algorithms suited to aid behavioral and social scientists. Scientific LogAnalyzer is highly flexible on the input side (unlimited types of log file formats), while strictly keeping a scientific output format. Features include (1) free definition of log file format, (2) searching and marking dependent on any combination of strings (necessary for identifying conditions in experiment data), (3) computation of response times, (4) detection of multiple sessions, (5) speedy analysis of large log files, (6) output in HTML and/or tab-delimited form, suitable for import into statistics software, and (7) a module for analyzing and visualizing drop-out. Several methodological features specifically needed in the analysis of data collected in Internet-based experiments have been implemented in the Web-based tool and are described in this article. A regression analysis with data from 44 log file analyses shows that the size of the log file and the domain name lookup are the two main factors determining the duration of an analysis. It is less than a minute for a standard experimental study with a 2 x 2 design, a dozen Web pages, and 48 participants (ca. 800 lines, including data from drop-outs). The current version of Scientific LogAnalyzer is freely available for small log files. Its Web address is http://genpsylab-logcrunsh.unizh.ch/.</p>","PeriodicalId":79800,"journal":{"name":"Behavior research methods, instruments, & computers : a journal of the Psychonomic Society, Inc","volume":"36 2","pages":"304-11"},"PeriodicalIF":0.0,"publicationDate":"2004-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.3758/bf03195576","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"24672617","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Human use regulatory affairs advisor (HURAA): learning about research ethics with intelligent learning modules.","authors":"Xiangen Hu, Arthur C Graesser","doi":"10.3758/bf03195569","DOIUrl":"https://doi.org/10.3758/bf03195569","url":null,"abstract":"<p><p>The Human Use Regulatory Affairs Advisor (HURAA) is a Web-based facility that provides help and training on the ethical use of human subjects in research, based on documents and regulations in United States federal agencies. HURAA has a number of standard features of conventional Web facilities and computer-based training, such as hypertext, multimedia, help modules, glossaries, archives, links to other sites, and page-turning didactic instruction. HURAA also has these intelligent features: (1) an animated conversational agent that serves as a navigational guide for the Web facility, (2) lessons with case-based and explanation-based reasoning, (3) document retrieval through natural language queries, and (4) a context-sensitive Frequently Asked Questions segment, called Point & Query. This article describes the functional learning components of HURAA, specifies its computational architecture, and summarizes empirical tests of the facility on learners.</p>","PeriodicalId":79800,"journal":{"name":"Behavior research methods, instruments, & computers : a journal of the Psychonomic Society, Inc","volume":"36 2","pages":"241-9"},"PeriodicalIF":0.0,"publicationDate":"2004-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.3758/bf03195569","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"24672687","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Tracking attention with the focus-window technique: the information filter must be calibrated.","authors":"Michael N Jones, D J K Mewhort","doi":"10.3758/bf03195573","DOIUrl":"https://doi.org/10.3758/bf03195573","url":null,"abstract":"<p><p>Many researchers have recommended the focus-window tracking (FWT) technique to study the course of information acquisition and visual attention. In FWT, the stimulus is blurred, and the subject scans for information by moving a clear window with a computer mouse. We show that poor calibration of the blur filter will bias search behavior in FWT because diagnostically useful information may leak through a filter even though the subjects are unaware of it. To provide a valid index of information acquisition, FWT requires an information filter that is calibrated to the task.</p>","PeriodicalId":79800,"journal":{"name":"Behavior research methods, instruments, & computers : a journal of the Psychonomic Society, Inc","volume":"36 2","pages":"270-6"},"PeriodicalIF":0.0,"publicationDate":"2004-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.3758/bf03195573","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"24672614","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"COR V2: teaching observational research with multimedia courseware.","authors":"Dawn G Blasko, Victoria A Kazmerski, Carla N Torgerson","doi":"10.3758/bf03195570","DOIUrl":"https://doi.org/10.3758/bf03195570","url":null,"abstract":"<p><p>Courseware for Observational Research (COR Version 2) is an interactive multimedia program designed to teach the foundation of the scientific method: systematic observation. COR uses digital video with interactive coding to teach basic concepts, such as creating precise operational definitions; using frequency, interval, and duration coding; developing sampling strategies; and analyzing and interpreting data. Through lessons, a case study, and laboratory exercises, it gradually scaffolds students from teacher-directed learning into self-directed learning. The newest addition to COR is a case study in which students work collaboratively, using their own observations to make recommendations about a child's disruptive behavior in an after-school program. Evaluations of the lessons showed that classes using COR received better grades on their field observations than did those using methods that are more traditional. Students' confidence and knowledge increased as they moved through each section of the program.</p>","PeriodicalId":79800,"journal":{"name":"Behavior research methods, instruments, & computers : a journal of the Psychonomic Society, Inc","volume":"36 2","pages":"250-5"},"PeriodicalIF":0.0,"publicationDate":"2004-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.3758/bf03195570","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"24672688","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"ATC-lab: an air traffic control simulator for the laboratory.","authors":"Shayne Loft, Andrew Hill, Andrew Neal, Michael Humphreys, Gillian Yeo","doi":"10.3758/bf03195579","DOIUrl":"https://doi.org/10.3758/bf03195579","url":null,"abstract":"<p><p>Air Traffic Control Laboratory Simulator (ATC-lab) is a new low- and medium-fidelity task environment that simulates air traffic control. ATC-lab allows the researcher to study human performance of tasks under tightly controlled experimental conditions in a dynamic, spatial environment. The researcher can create standardized air traffic scenarios by manipulating a wide variety of parameters. These include temporal and spatial variables. There are two main versions of ATC-lab. The medium-fidelity simulator provides a simplified version of en route air traffic control, requiring participants to visually search a screen and both recognize and resolve conflicts so that adequate separation is maintained between all aircraft. The low-fidelity simulator presents pairs of aircraft in isolation, controlling the participant's focus of attention, which provides a more systematic measurement of conflict recognition and resolution performance. Preliminary studies have demonstrated that ATC-lab is a flexible tool for applied cognition research.</p>","PeriodicalId":79800,"journal":{"name":"Behavior research methods, instruments, & computers : a journal of the Psychonomic Society, Inc","volume":"36 2","pages":"331-8"},"PeriodicalIF":0.0,"publicationDate":"2004-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.3758/bf03195579","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"24671358","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Reform of statistical inference in psychology: the case of memory & cognition.","authors":"Sue Finch, Geoff Cumming, Jennifer Williams, Lee Palmer, Elvira Griffith, Chris Alders, James Anderson, Olivia Goodman","doi":"10.3758/bf03195577","DOIUrl":"https://doi.org/10.3758/bf03195577","url":null,"abstract":"<p><p>Geoffrey Loftus, Editor of Memory & Cognition from 1994 to 1997, strongly encouraged presentation of figures with error bars and avoidance of null hypothesis significance testing (NHST). The authors examined 696 Memory & Cognition articles published before, during, and after the Loftus editorship. Use of figures with bars increased to 47% under Loftus's editorship and then declined. Bars were rarely used for interpretation, and NHST remained almost universal. Analysis of 309 articles in other psychology journals confirmed that Loftus's influence was most evident in the articles he accepted for publication, but was otherwise limited. An e-mail survey of authors of papers accepted by Loftus revealed some support for his policy, but allegiance to traditional practices as well. Reform of psychologists' statistical practices would require more than editorial encouragement.</p>","PeriodicalId":79800,"journal":{"name":"Behavior research methods, instruments, & computers : a journal of the Psychonomic Society, Inc","volume":"36 2","pages":"312-24"},"PeriodicalIF":0.0,"publicationDate":"2004-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.3758/bf03195577","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"24672618","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A new breed of computer users: rats control a cursor via joystick manipulation.","authors":"David A Washburn, Michael J Rulon, Jonathan P Gulledge","doi":"10.3758/bf03195562","DOIUrl":"https://doi.org/10.3758/bf03195562","url":null,"abstract":"<p><p>Many exciting findings have been reported in the 15 years since monkeys were first demonstrated to have the capacity to perform computerized tasks. The present data indicate that albino rats can also learn to respond to computer-generated stimuli by manipulating a joystick. Although the rat's control of the cursor is not as skillful as has been reported for primate species, it is clearly better than chance and suggests the great potential for comparative investigation afforded by use of the computer test system.</p>","PeriodicalId":79800,"journal":{"name":"Behavior research methods, instruments, & computers : a journal of the Psychonomic Society, Inc","volume":"36 2","pages":"173-9"},"PeriodicalIF":0.0,"publicationDate":"2004-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.3758/bf03195562","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"24673272","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Using image analysis techniques to analyze figure-copying performance of patients with visuospatial neglect and control groups.","authors":"Richard Guest, Nick Donnelly, Michael Fairhurst, Jonathan Potter","doi":"10.3758/bf03195581","DOIUrl":"https://doi.org/10.3758/bf03195581","url":null,"abstract":"<p><p>Figure copying is often used to detect visuospatial neglect (VSN) in brain-damaged patients. We describe algorithms that enable the computation of parameters for describing figure-copying performance. The researcher can readily implement these algorithms on a computer using image analysis software, and they provide information on goodness-of-fit, relative to a standard model, as well as on dynamic aspects of subjects' performance in completing figure copies. To demonstrate the clinical utility of these algorithms, preliminary results from a group of right-hemisphere brain-damaged patients, some of whom have VSN, are compared with those for age-matched controls.</p>","PeriodicalId":79800,"journal":{"name":"Behavior research methods, instruments, & computers : a journal of the Psychonomic Society, Inc","volume":"36 2","pages":"347-54"},"PeriodicalIF":0.0,"publicationDate":"2004-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.3758/bf03195581","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"24671360","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A web-based family problem-solving intervention for families of children with traumatic brain injury.","authors":"Shari L Wade, Christopher R Wolfe, John P Pestian","doi":"10.3758/bf03195572","DOIUrl":"https://doi.org/10.3758/bf03195572","url":null,"abstract":"<p><p>We developed a Web-based intervention for pediatric traumatic brain injury (TBI) and examined its feasibility for participants with limited computer experience. Six families, including parents, siblings, and children with TBI, were given computers, Web cameras, and high-speed Internet access. Weekly videoconferences with the therapist were conducted after participants completed on-line interactive experiences on problem solving, communication, and TBI-specific behavior management. Families were assigned to videoconference with NetMeeting (iBOT cameras) or ViaVideo. Participants ranked the Web site and videoconferences as moderately to very easy to use. ViaVideo participants rated videoconferencing significantly more favorably relative to face-to-face meetings than did NetMeeting participants. Both the Web site and videoconferencing were rated as very helpful. All families demonstrated improved outcomes on one or more target behaviors, including increased understanding of the injury and improved parent-child relationships. All parents and siblings and all but 1 child with TBI said they would recommend the program to others. We conclude that a face-to-face intervention can be successfully adapted to the Web for families with varied computer experience.</p>","PeriodicalId":79800,"journal":{"name":"Behavior research methods, instruments, & computers : a journal of the Psychonomic Society, Inc","volume":"36 2","pages":"261-9"},"PeriodicalIF":0.0,"publicationDate":"2004-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.3758/bf03195572","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"24672613","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Self-validating presentation and response timing in cognitive paradigms: how and why?","authors":"Richard R Plant, Nick Hammond, Garry Turner","doi":"10.3758/bf03195575","DOIUrl":"https://doi.org/10.3758/bf03195575","url":null,"abstract":"<p><p>With the proliferation of commercial experiment generators and custom software within cognitive psychology and the behavioral sciences, many have assumed that issues regarding millisecond timing accuracy have been largely solved. However, through empirical investigation of a variety of paradigms, we have discovered numerous sources of timing error. These can range from poor scripting practices, to incorrect timing specifications, to hardware variability. Building upon earlier research, we have developed a commercial device and associated software that enables researchers to benchmark most computer-based paradigms in situ and without modification. This gives them the opportunity to correct timing errors where practicable, increase replicability, and reduce variability by altering onset times for stimuli, by replacing inaccurate hardware, or by post hoc statistical manipulation should the source of error be constant. We outline the features of the device and accompanying software suite, stress the importance of such independent validation, and highlight typical areas that can be subject to error.</p>","PeriodicalId":79800,"journal":{"name":"Behavior research methods, instruments, & computers : a journal of the Psychonomic Society, Inc","volume":"36 2","pages":"291-303"},"PeriodicalIF":0.0,"publicationDate":"2004-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.3758/bf03195575","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"24672616","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}