{"title":"Text correction in pen-based computers: an empirical comparison of methods","authors":"T. V. Gelderen, A. Jameson, Arne L. Duwaer","doi":"10.1145/259964.260115","DOIUrl":"https://doi.org/10.1145/259964.260115","url":null,"abstract":"Three methods for correcting text in pen-based computers were compared in an experiment involving 30 subjects. In spite of simulated virtually perfect character recognition, the two methods involving handwriting proved 25~0 slower than the method involving a “virtual keyboard”. There was essentially no difference between the detectable errors: 2 missing between-word spaces, which were to be inserted using a “space” gesture or a space key (on the virtual keyboard); 2 superfluous within-word spaces, to be removed using a “delete” gesture or key; and 2 incorrect letters which were to be overwritten with the correct letter (with the handwriting methods) or deleted prior to insertion of the correct letter (with the virtual keyboard). Apparatus A Philips Advanced Interactive Display (PAID), was used as the pen-based computer. It had a VGA (640 x 480 pixel) 11“ LCD (backlit) display with a stylus attached to the display by a thin cable. Subjects wrote directly on the screen, and immediate feedback was given of the resulting “electronic ink”. With the two handwriting methods, the sentence was displayed in a window comprising 6 rows of 15 boxes (one for each letter, each box measuring 1 x 1 cm); changes to the sentence were reflected in the same window. A Wizard-of-Oz technique was used to simulate essentially perfect character recognition, so as to eliminate the noise that would be introduced into the data by imperfect automatic character recognition: The experimenter worked at a hidden desktop computer whose screen showed the same display as that of the pen-based computer. With the help of specially written software, the experimenter caused the symbols written by the subject to be handled exactly aa if the computer had (correctly) interpreted them; for example, in the delay condition, the results of the subject’s actions were displayed after each delay of 1.5 sees. With the virtual keyboard method, the screen displayed at the top the sentence to be corrected and at the bottom the virtual keyboard, in which each key measured 0.8 x0.8 cm. Subjects The 30 paid subjects (mean age: 25) had no previous experience with pen-based computing, but all had experience with the use of a keyboard. Design Each subject performed 1 correction task with each of the 3 methods (as well as with 5 other methods not discussed here, of which 4 involved handwriting and 1 involved a different type of virtual keyboard). For each subject, the order of using the different methods was randomized, as was the selection of the sentence to be corrected with each method. Procedure Subjects were first given a general introduction and a practice session lasting 20 minutes to acquaint them with all of the variants used. Then each subject performed, with each method, 3 tasks: 2 (not analysed here) that involved only entering a given sentence, followed by 1 text correction task aa described above. 
RESULTS The time to execute each text correction task was measured between presentation of the se","PeriodicalId":350454,"journal":{"name":"INTERACT '93 and CHI '93 Conference Companion on Human Factors in Computing Systems","volume":"7 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1993-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115711094","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Working towards rich and flexible file representations","authors":"Stephanie Houde, G. Salomon","doi":"10.1145/259964.259974","DOIUrl":"https://doi.org/10.1145/259964.259974","url":null,"abstract":"T’oday, icons are commonly used to represent files. In recent years, they have become increasingly more expressive. Initially, in command line systems, text labels alone were used to identify files. With the introduction of graphical user interfaces, generic document and application icons were inutxtuced (see fig la). Over the years, file icons took on an appcamnce that reflects the application usd to crwted them (ltig lb). More rtxently, some applications (e.g. Adobe’s P’hotoshop, Apple’s QuickTime MoviePlayer) produce file icons that serve as proxies[2] of the document’s contents (Fig. lc). These proxies are essentially visual miniatures of the document. There are, however, other types of proxies possible. This paper builds on the recognized trend toward towards information-rich icons. It provides several examples Otfhow systems can emphasize a file’s unique characteristics and thereby facilitate the often necessary task of browsing.","PeriodicalId":350454,"journal":{"name":"INTERACT '93 and CHI '93 Conference Companion on Human Factors in Computing Systems","volume":"31 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1993-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125765719","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Mode preference in a simple data-retrieval task","authors":"Alexander I. Rudnicky","doi":"10.1145/259964.260078","DOIUrl":"https://doi.org/10.1145/259964.260078","url":null,"abstract":"This paper describes some recent experiments that assess user behavior in a multi-modal environment in which actions can be performed with equivalent effect in speech, keyboard or scroiler modes. Results indicate that users freely choose speech over other modalities, even when it is less efficient in objective terms, such as time-to-completion or input error.","PeriodicalId":350454,"journal":{"name":"INTERACT '93 and CHI '93 Conference Companion on Human Factors in Computing Systems","volume":"17 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1993-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125388658","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Designing in virtual reality: perception-action coupling and form semantics","authors":"G. Smets, K. Overbeeke, P. Stappers","doi":"10.1145/259964.259975","DOIUrl":"https://doi.org/10.1145/259964.259975","url":null,"abstract":"In this papx, we describe work on a CAD package we are developing for use in virtual reality. Although this research is only preliminary, it demonstrates some advantages of designing in virtual reality. We describe these advantages in terms of ecological approach to perception, focusing on two of the implications of this approach: the role of perception-action coupling in producing true direct manipulation, and the desirability of providing perceptual information about the affordances of objects in the design environment. KEWVORDS: vhtual reality, CAD, ecological approaches","PeriodicalId":350454,"journal":{"name":"INTERACT '93 and CHI '93 Conference Companion on Human Factors in Computing Systems","volume":"37 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1993-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122762961","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Back to the future: a graphical layering system inspired by transparent paper","authors":"Matt Belge, Ishantha Lokuge, David Rivers","doi":"10.1145/259964.260145","DOIUrl":"https://doi.org/10.1145/259964.260145","url":null,"abstract":"Many graphics systems today use transparent layers to help users crganize information. However, due to problems in the User Interface desigm these systems often confuse users anddistract them frcnnthe tasktbey are trying toaccOmplish. Before the advent of desktop computers, people managed similar problems by drawing on sheets of plastic transpmnt paper (transparencies). Believing that layering is a powerful technique, we reexamined the qualities of these transparencies as a source of inspiration. This gave us some innovative ideas. We built a prototype. Pilot studies performed on the prototype show promising results.","PeriodicalId":350454,"journal":{"name":"INTERACT '93 and CHI '93 Conference Companion on Human Factors in Computing Systems","volume":"8 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1993-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116865744","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Model-base user interface design by example and by answering questions","authors":"Martin R. Frank, J. Foley","doi":"10.1145/259964.260172","DOIUrl":"https://doi.org/10.1145/259964.260172","url":null,"abstract":"Model-based user interface design is based on a description of application objects and operations at a level of abstraction higher than that of code. A good model can be used to assist in designing the user interface, support multiple interfaces, help separate interface and application, describe input sequencing in a simple way, check consistency and completeness of the interfaee, evaluate its speed-of-use and generate context-specific textual and animated help. However, designers rarely use computer-supported application modelling today and prefer less formal approaches such as using a story board of interface prototypes. One reason is that available tools use special-purpose languages for the model spw ification. Another reason is that these tools force the designers to specify the application model before they can start working on the visual interface, which is their main area of expertise. We present a novel methodology for concurrent development of the user interface and the application model which overcomes both problems by combining story-boarding and model-based interface design.","PeriodicalId":350454,"journal":{"name":"INTERACT '93 and CHI '93 Conference Companion on Human Factors in Computing Systems","volume":"7 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1993-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128621397","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A Wizard of Oz platform for the study of multimodal systems","authors":"D. Salber, J. Coutaz","doi":"10.1145/259964.260126","DOIUrl":"https://doi.org/10.1145/259964.260126","url":null,"abstract":"The Wizard of Oz (WOz) technique is an experimental evaluation mechanism. It allows the observation of a user operating an apparently fully functioning system whose missing services are supplemented by a hidden wizard. In the absence of generalizable theories and models for the design and evaluation of multimodal systems, the WOz technique is an appropriate approach to the identification of sound design solutions. We show how the WOz technique can be extended to the study of multimodal interfaces and we introduce the Neimo platform as an illustration of our early experience in the development of such platforms.","PeriodicalId":350454,"journal":{"name":"INTERACT '93 and CHI '93 Conference Companion on Human Factors in Computing Systems","volume":"30 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1993-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114528945","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Using video scenarios to present consumer product interfaces","authors":"R. Kolli","doi":"10.1145/259964.260071","DOIUrl":"https://doi.org/10.1145/259964.260071","url":null,"abstract":"INTRODUCTION In the initial stages of new product development, designers present alternative concepts through sketches, storyboards, interactive prototypes and physical mock-up models. These representations are useful for communication with the design team, the client and for early usability testing with users. In caseof highly intemctive consumer electronic products (stereo systems,video cameras, fax machines, telephones etc.), LCD displays, buttons, sliders and other user control elements are closely integrated with the three dimensional product form. Hence, an assessment of the product interface necessarily involves the product form as well.","PeriodicalId":350454,"journal":{"name":"INTERACT '93 and CHI '93 Conference Companion on Human Factors in Computing Systems","volume":"17 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1993-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126428726","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A framework for describing interactions with graphical widgets","authors":"M. Chen","doi":"10.1145/259964.260146","DOIUrl":"https://doi.org/10.1145/259964.260146","url":null,"abstract":"Decnbmg the user interaction and visual feedback provided by a graphical widget is currently done through combined v ritten description with visual interaction snap-sho~. This approach is laborious and can be repetitive if all the widgets in a Graphical User Interface (GUI) must be documented. Furthermore. such a de.scriptiort does not necessary reveal common widget behavior. nor does it directly guide a person in creating a new widget. One need$ to infer standard behavior from the existing widget set before a new and consistent widget can be designed","PeriodicalId":350454,"journal":{"name":"INTERACT '93 and CHI '93 Conference Companion on Human Factors in Computing Systems","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1993-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130061988","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A taxonomy of graphical presentation","authors":"R. Spence","doi":"10.1145/259964.260138","DOIUrl":"https://doi.org/10.1145/259964.260138","url":null,"abstract":"A taxonomy of graphical presentation is proposed which is Won four mutually orthogonal transformations. It allows a range of peaentation techniques to be simply descrii.","PeriodicalId":350454,"journal":{"name":"INTERACT '93 and CHI '93 Conference Companion on Human Factors in Computing Systems","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1993-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128940040","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}