Why is 3-D interaction so hard and what can we really do about it?
J. Gómez, Rikk Carey, Tony Fields, A. V. Dam, Dan Venolia
Proceedings of the 21st annual conference on Computer graphics and interactive techniques, July 24, 1994
DOI: 10.1145/192161.192299
Citations: 10
Abstract
Interaction with 2-D scenes is by now reasonably well understood; does the addition of a dimension change the fundamental nature of the problem? What can developers do to reduce the complexity a user faces when working with a 3-D scene? This panel will cover the role of an Application Program Interface (API) in providing user interaction capabilities, how performance affects the issue, and the concept of “user experience.”

Historically, for graphics systems, the API has been the link between the hardware and the developer, and the developer has been the link between the hardware and the user. As hardware has grown in power, APIs have grown in the complexity and power they provide to the developer, with a corresponding change presented to the user. Until recently, however, Human-Computer Interaction (HCI) has not been an important part of that process. APIs frequently have poor or no support for HCI, and all too frequently developers provide HCI that simply exposes the API to the user. Users are not programmers, however, so the API should support more of an interface than a basic link between user and machine. The question is: just what is it that can be provided?

One approach would be to provide computing analogs of tools that already exist in the user’s discipline. This is not adequate; the existing tools are often anachronistic, having been developed during some prior state of technology and living on through inertia. In addition, physical tools are limited by physical reality; this is not a limitation for computers, where it is frequently useful to work in a mode that can’t exist in real life. Thus, the scope of tool development should not be limited by the range of what is already available. Current practice includes the use of “widgets”: mechanisms inserted into the 3-D scene that the user can directly manipulate to cause some change to the scene and/or the objects within it. Even this practice raises questions. Should widgets be fully participating 3-D objects, casting shadows and so on, or should they be metaphysical tools that are there but not really there? Should a widget be multifunctional, changing its behavior with the context in which it is used?

In addition, there is the issue of how performance affects interaction. In real life, 3-D manipulation is immediate and generally involves some kind of real-time feedback loop (“real time” is used here in its technical rather than its marketing meaning). Many contemporary graphics systems can’t provide this kind of throughput, which raises the question of how, and whether, interaction techniques should be modified in the presence of slower update rates.

The most general issue is just what is involved in presenting interaction capabilities to the user. Providing widgets allows the user to interact, but shouldn’t there be a theme to how the widgets look and behave? It’s reasonable for a user to expect consistency in the working environment; it is therefore reasonable for the graphics system to have all aspects of 3-D interaction defined, including colors, how widget behavior is modified by manipulating the widget, how widgets present themselves onscreen, and so on. The problem becomes one not just of “user interface” but of “user experience”: it addresses not just which pixels to put on the screen, but the complete experience of interacting with a 3-D scene.
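To make the widget idea concrete, the C++ sketch below shows one way a widget can live in the scene as a directly manipulable object that edits another object on the user's behalf, and it hints at the update-rate concern by switching to cheaper proxy feedback when the previous frame missed an interactive budget. This is a minimal illustration, not the panel's design or any particular API: the names Widget, TranslateWidget, and SceneObject, and the 33 ms frame budget, are hypothetical choices made for the sketch.

```cpp
// Illustrative sketch only: Widget, TranslateWidget, and SceneObject are
// hypothetical names, not taken from the panel or from any real API.
#include <cstdio>

struct Vec3 { float x, y, z; };

// A scene object the user ultimately wants to edit.
struct SceneObject {
    Vec3 position{0, 0, 0};
};

// A widget is itself part of the 3-D scene: it can be hit-tested and
// dragged, and it translates that manipulation into a change on the
// object it is attached to.
class Widget {
public:
    explicit Widget(SceneObject* target) : target_(target) {}
    virtual ~Widget() = default;

    // Returns true if the pick ray (origin + direction) selects this widget.
    virtual bool hitTest(const Vec3& rayOrigin, const Vec3& rayDir) const = 0;

    // Applies a drag delta, in world units, to the target object.
    virtual void drag(const Vec3& delta) = 0;

protected:
    SceneObject* target_;
};

// A single-axis translation handle: dragging it moves the target along
// one axis only, a common constrained-manipulation widget.
class TranslateWidget : public Widget {
public:
    TranslateWidget(SceneObject* target, Vec3 axis)
        : Widget(target), axis_(axis) {}

    bool hitTest(const Vec3&, const Vec3&) const override {
        return true;  // A real widget would intersect the ray with its handle geometry.
    }

    void drag(const Vec3& delta) override {
        // Project the drag onto the widget's axis so the motion stays constrained.
        float along = delta.x * axis_.x + delta.y * axis_.y + delta.z * axis_.z;
        target_->position.x += along * axis_.x;
        target_->position.y += along * axis_.y;
        target_->position.z += along * axis_.z;
    }

private:
    Vec3 axis_;
};

int main() {
    SceneObject teapot;
    TranslateWidget xHandle(&teapot, Vec3{1, 0, 0});

    // Simulated interaction step: if the last frame took too long, a system
    // might redraw only a cheap proxy (bounding box, wireframe) during the
    // drag so that feedback stays interactive despite slow update rates.
    const float frameBudgetMs = 33.0f;  // roughly a 30 Hz interactive target (hypothetical)
    float lastFrameMs = 50.0f;          // pretend the renderer missed the budget

    if (xHandle.hitTest(Vec3{0, 0, 5}, Vec3{0, 0, -1})) {
        xHandle.drag(Vec3{0.4f, 0.2f, 0.0f});  // only the x component survives the constraint
        bool drawProxyOnly = lastFrameMs > frameBudgetMs;
        std::printf("teapot.x = %.2f, proxy feedback: %s\n",
                    teapot.position.x, drawProxyOnly ? "yes" : "no");
    }
    return 0;
}
```

The design choice the sketch emphasizes is the one the panel raises: the widget is an object in the scene with its own pick and drag behavior, rather than a 2-D control bolted onto the window, so questions about its appearance, consistency, and context-dependent behavior become part of the 3-D scene design itself.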