{"title":"Review of the state-of-the-art (session summary)","authors":"W. Humphrey","doi":"10.5555/317498.317687","DOIUrl":null,"url":null,"abstract":"The opening session of the 5th International Software Process Workshop covered a wide range of topics so it is not possible to capture its full scope in this brief summary. This paper briefly outlines the main views expressed by the participants and then provides a precis of the participants' comments. Because of the dynamic nature of those discussions, however, I have taken the liberty of grouping related points for a more coherent presentation. I also take full responsibility for any errors or omissions.\nEven though this first session was entitled “state-of-the-art,” there was little discussion of actual process modeling experience. From the many examples given in the proceedings, however, there was considerable evidence of practical experience and the discussions reflected a general consensus that process modeling methods have been found both practical and helpful. While no examples were given of the unsuccessful application of formal methods, there was a strong minority view that such low-technology methods as procedure manuals and software standards were widely used and are often quite effective.\nThere was also general agreement that tools which include an implicit process were not true process models. To qualify as a process model, the process used by the tool should be explicitly defined. A strong consensus also held that the time had now come to more broadly apply these modeling methods to a range of well-known software process activities. 
It was felt this would provide useful insights on their development and introduction.\nWhile there was no focused discussion on the objectives of process modeling, the purposes noted fell into three general categories:to provide a precise framework for understanding, experimenting with, and reasoning about the process;\nto facilitate process automation;\nto provide a basis for process control.\n\nAn important role of process models is improvement of the combined human/technology activities involved in producing software. Because of the dynamic nature of such people intensive work, it was suggested that these models should include the recursive capability to improve themselves.\nA subject that was widely discussed and returned to again in subsequent workshop sessions was the special impact of the human element in software process models. While it was agreed that the human element adds considerable complexity, there were widely divergent viewpoints. These ranged from considering human-related issues as outside our area of competence to believing that such issues were central to all process work.\nBill Curtis opened this first session with a discussion of key issues and the following challenge: “How much of actual software development behavior will be affected (by process modeling) and what will be the benefit?” He then divided software process issues into two classes: the control process and the learning process. The former concerns management's need for an orderly framework for evaluating progress while the latter more closely approximates the exploratory and often intuitive nature of much software development work. 
Sam Redwine noted that software engineering could likely learn from the management methods used in developing and managing teams in professional sports.\nThe ensuing discussion was then focused by Bill Curtis' suggestion that configuration management would be a good place to initially apply process models since this well-understood function clearly distinguishes between product creations and the control of product evolution. Peter Feiler pointed out that configuration management should be viewed as having both support and control functions since it both helps the professionals do quality work and provides a controlled framework for system evolution. A number of other software development areas were then suggested for process modeling, including bug tracking, the product build process, and testing. Anthony Finkelstein noted that his process research focused on aspects of the requirements process because he feels this area has less support, is less well-defined, and that any results are thus more likely to have a substantial impact.\nMark Kellner raised the issue of the objectives of process modeling. With traditional software development, for example, software products are executed by machines while process programs must be understood and at least partially performed by people. This causes a fundamental paradigm shift. Bill Curtis suggested that an important role of process programs is to provide models for experimentation and learning. Peter Feiler also noted that process programs provide a precise basis for documenting, communicating, validating, simulating, controlling, and automating software development work. Sam Redwine further pointed out that an attachment to Manny Lehman's paper included a comprehensive listing of the potential roles of process models.\nBill Curtis then asked how the subject of process programming differed from traditional computer science. 
Colin Tully noted that with process programs, we are trying to describe processes that are not entirely executed by machine. In this effort, we have tended to accept the existing paradigms for software development. Since these do not seem to fit too well, he questioned whether we are on the right track and if we know what paradigms are most appropriate.\nGail Kaiser and Frank Belz both felt that we would learn much from the paradigms of real-time systems design since both involve multiple asynchronous views of complex activities. Peter Feiler questioned whether process models differed that much from many other areas which involve people and tools in an overall process. A common problem is the search for promising areas to automate.\nKaren Huff then made the observation that when dealing with people we can often focus on what we want done rather than the more explicit details of how it is to be accomplished. Watts Humphrey added that distinctions should be made between dealing with machines and people. With people, for example, the analog of the instruction set is neither clear, consistent across environments, or stable. There are also questions of motivation and accuracy and, as demonstrated by manufacturing experience, people do not perform very well when treated as machines. As demonstrated by the Japanese in automobile production or by Alcoa with aluminum sheet production, human performance is enhanced when they feel they own the process they are using. This is achieved in such cases by involving them in continuous process improvement. Colin Tully pointed out that this was Manny Lehman's issue with process programming.\nDave Garlan next asked why, in a state-of-the-art session, we were not hearing much about experience. Was it that there were no real successes? Lolo Penedo pointed to the PML-PCE work (described in the Roberts and Snowdon papers) as a good example. 
Wilhelm Schaefer noted they had considerable success in formally modeling the Berlin headquarters of a large software operation. Dieter Rombach said that there is a growing body of modeling experience and that by devoting too much effort to selecting the right formalism we might repeat the futile programming search for the one right language. Since there is not likely to be one best formalism, we should pick some real process examples and use available methods to model them.\nBill Curtis then raised the question of the qualifications for a process program. Is MAKE, for example, a process program? Bob Balzer contended that a tool with a built in, though not explicit, process was not a process program. Dewayne Perry agreed, although he felt that MAKE partially qualified in that it coordinated the operation of other tools. Frank Belz then pointed out one of the dangers of building implied processes into tools. By selecting a single way to do a job which could be done in several different ways, the entire process is constrained to this single alternative. Taly Minsky noted an important distinction between tools and the larger processes which control them. With configuration management, for example, we are not dealing with one person systems. Often the actions of a single individual can impact many others. This generally requires a consensus decision system. When one module is to be changed or promoted, the other involved modules must be, so to speak, consulted. When conflicts arise, these must be flagged and resolved before the proposed action can be taken. This requires sets of rules which raises the further question of what rules are appropriate and who writes them (see discussion of session #3 on policies). Taly also suggested that there should likely be a rule-making hierarchy with different people having different rule-making authority.\nBob Balzer then suggested a separation of process modeling issues. 
One category concerned the activity domains which we can learn about and mechanize. As we see what is successful, we can make changes to provide further improvements. The other issue concerns the formal representation of the process, including its use of tools. Mark Dowson objected to the neatness of this paradigm. He interpreted Bob Balzer as saying that you first devise a model of the activity of interest, you than formalize its representation, and then finally mechanize it for execution. Actual processes, he feels, do not work this way. We typically start with a vague idea of how to proceed, hack up a mechanization which works, and then improve it until it performs effectively. When we have worked with it long enough, we may finally understand the process and may in fact end up in much the same place. Steve Reiss suggested that in this work we should distinguish between those classes of models which can be mechanized and those that cannot. The latter, he feels, have to be dynamic because they generally deal with human behavior and management issues.\nWatts Humphrey suggested that Bob Balzer include a third category: developing and improving the","PeriodicalId":414925,"journal":{"name":"International Software Process Workshop","volume":"19 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"1990-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"International Software Process Workshop","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.5555/317498.317687","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 0
Abstract
The opening session of the 5th International Software Process Workshop covered a wide range of topics, so it is not possible to capture its full scope in this brief summary. This paper briefly outlines the main views expressed by the participants and then provides a précis of their comments. Because of the dynamic nature of those discussions, however, I have taken the liberty of grouping related points for a more coherent presentation. I also take full responsibility for any errors or omissions.
Even though this first session was entitled “state-of-the-art,” there was little discussion of actual process modeling experience. From the many examples given in the proceedings, however, there was considerable evidence of practical experience, and the discussions reflected a general consensus that process modeling methods have been found both practical and helpful. While no examples were given of the unsuccessful application of formal methods, there was a strong minority view that such low-technology methods as procedure manuals and software standards were widely used and often quite effective.
There was also general agreement that tools which include an implicit process were not true process models. To qualify as a process model, the process used by the tool should be explicitly defined. A strong consensus also held that the time had now come to more broadly apply these modeling methods to a range of well-known software process activities. It was felt this would provide useful insights on their development and introduction.
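The distinction drawn here, between a tool that merely embeds a process and a true process model, can be illustrated with a minimal sketch. The change-request states and transitions below are hypothetical examples, not anything discussed at the workshop; the point is only that an explicit model makes the process itself data that can be inspected, reasoned about, and changed.

```python
# A minimal sketch of an *explicit* process model: the process is a data
# structure visible outside any one tool, rather than logic buried inside it.
# The state names and transitions are invented for illustration.

CHANGE_PROCESS = {
    "submitted": ["reviewed"],
    "reviewed": ["approved", "rejected"],
    "approved": ["integrated"],
    "rejected": [],
    "integrated": [],
}

def advance(state: str, next_state: str) -> str:
    """Enforce the explicitly defined process: only modeled transitions are legal."""
    if next_state not in CHANGE_PROCESS.get(state, []):
        raise ValueError(f"illegal transition: {state} -> {next_state}")
    return next_state

state = "submitted"
state = advance(state, "reviewed")   # legal under the model
state = advance(state, "approved")   # legal under the model
```

A tool with only an implicit process would hard-code this sequence in its control flow, leaving nothing for participants to examine or revise.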
While there was no focused discussion on the objectives of process modeling, the purposes noted fell into three general categories:
- to provide a precise framework for understanding, experimenting with, and reasoning about the process;
- to facilitate process automation;
- to provide a basis for process control.
An important role of process models is improvement of the combined human/technology activities involved in producing software. Because of the dynamic nature of such people-intensive work, it was suggested that these models should include the recursive capability to improve themselves.
A subject that was widely discussed and returned to again in subsequent workshop sessions was the special impact of the human element in software process models. While it was agreed that the human element adds considerable complexity, there were widely divergent viewpoints. These ranged from considering human-related issues as outside our area of competence to believing that such issues were central to all process work.
Bill Curtis opened this first session with a discussion of key issues and the following challenge: “How much of actual software development behavior will be affected (by process modeling) and what will be the benefit?” He then divided software process issues into two classes: the control process and the learning process. The former concerns management's need for an orderly framework for evaluating progress while the latter more closely approximates the exploratory and often intuitive nature of much software development work. Sam Redwine noted that software engineering could likely learn from the management methods used in developing and managing teams in professional sports.
The ensuing discussion was then focused by Bill Curtis' suggestion that configuration management would be a good place to initially apply process models, since this well-understood function clearly distinguishes between product creation and the control of product evolution. Peter Feiler pointed out that configuration management should be viewed as having both support and control functions since it both helps the professionals do quality work and provides a controlled framework for system evolution. A number of other software development areas were then suggested for process modeling, including bug tracking, the product build process, and testing. Anthony Finkelstein noted that his process research focused on aspects of the requirements process because he feels this area has less support, is less well-defined, and that any results are thus more likely to have a substantial impact.
Mark Kellner raised the issue of the objectives of process modeling. With traditional software development, for example, software products are executed by machines while process programs must be understood and at least partially performed by people. This causes a fundamental paradigm shift. Bill Curtis suggested that an important role of process programs is to provide models for experimentation and learning. Peter Feiler also noted that process programs provide a precise basis for documenting, communicating, validating, simulating, controlling, and automating software development work. Sam Redwine further pointed out that an attachment to Manny Lehman's paper included a comprehensive listing of the potential roles of process models.
Bill Curtis then asked how the subject of process programming differed from traditional computer science. Colin Tully noted that with process programs, we are trying to describe processes that are not entirely executed by machine. In this effort, we have tended to accept the existing paradigms for software development. Since these do not seem to fit too well, he questioned whether we are on the right track and if we know what paradigms are most appropriate.
Gail Kaiser and Frank Belz both felt that we would learn much from the paradigms of real-time systems design since both involve multiple asynchronous views of complex activities. Peter Feiler questioned whether process models differed that much from many other areas which involve people and tools in an overall process. A common problem is the search for promising areas to automate.
Karen Huff then made the observation that when dealing with people we can often focus on what we want done rather than the more explicit details of how it is to be accomplished. Watts Humphrey added that distinctions should be made between dealing with machines and dealing with people. With people, for example, the analog of the instruction set is neither clear, consistent across environments, nor stable. There are also questions of motivation and accuracy and, as demonstrated by manufacturing experience, people do not perform very well when treated as machines. As demonstrated by the Japanese in automobile production and by Alcoa in aluminum sheet production, people perform better when they feel they own the process they are using. This is achieved in such cases by involving them in continuous process improvement. Colin Tully pointed out that this was Manny Lehman's issue with process programming.
Dave Garlan next asked why, in a state-of-the-art session, we were not hearing much about experience. Was it that there were no real successes? Lolo Penedo pointed to the PML-PCE work (described in the Roberts and Snowdon papers) as a good example. Wilhelm Schaefer noted they had considerable success in formally modeling the Berlin headquarters of a large software operation. Dieter Rombach said that there is a growing body of modeling experience and that by devoting too much effort to selecting the right formalism we might repeat programming's futile search for the one right language. Since there is not likely to be one best formalism, we should pick some real process examples and use available methods to model them.
Bill Curtis then raised the question of the qualifications for a process program. Is MAKE, for example, a process program? Bob Balzer contended that a tool with a built-in, though not explicit, process was not a process program. Dewayne Perry agreed, although he felt that MAKE partially qualified in that it coordinated the operation of other tools. Frank Belz then pointed out one of the dangers of building implied processes into tools. By selecting a single way to do a job which could be done in several different ways, the entire process is constrained to this single alternative. Taly Minsky noted an important distinction between tools and the larger processes which control them. With configuration management, for example, we are not dealing with one-person systems. Often the actions of a single individual can impact many others. This generally requires a consensus decision system. When one module is to be changed or promoted, the other involved modules must be, so to speak, consulted. When conflicts arise, these must be flagged and resolved before the proposed action can be taken. This requires sets of rules, which raises the further question of what rules are appropriate and who writes them (see discussion of session #3 on policies). Taly also suggested that there should likely be a rule-making hierarchy with different people having different rule-making authority.
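Minsky's notion of consulting the other involved modules before a change is promoted can be sketched concretely. The module names, dependency data, and single approval rule below are hypothetical illustrations, not part of any system described at the workshop.

```python
# Hypothetical sketch of a consensus check in configuration management:
# before a module is promoted, every module that depends on it is
# "consulted", and unresolved conflicts are flagged rather than overridden.
# The dependency data and rule are invented for illustration.

DEPENDS_ON = {
    "parser": ["lexer"],
    "codegen": ["parser"],
}

def dependents_of(module):
    """Modules whose builds are affected if `module` changes."""
    return [m for m, deps in DEPENDS_ON.items() if module in deps]

def promote(module, approvals):
    """Promote `module` only if every dependent module has approved.

    Returns (True, []) on success, or (False, conflicts) where `conflicts`
    lists the dependents that must still be consulted before the action
    can be taken.
    """
    conflicts = [m for m in dependents_of(module) if m not in approvals]
    if conflicts:
        return False, conflicts  # flag for resolution, do not promote
    return True, []
```

Who supplies the `DEPENDS_ON` data and who is entitled to grant an approval are exactly the rule-making questions Minsky raised; a hierarchy of rule-making authority would determine who may edit each part of such a model.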
Bob Balzer then suggested a separation of process modeling issues. One category concerned the activity domains which we can learn about and mechanize. As we see what is successful, we can make changes to provide further improvements. The other issue concerns the formal representation of the process, including its use of tools. Mark Dowson objected to the neatness of this paradigm. He interpreted Bob Balzer as saying that you first devise a model of the activity of interest, you then formalize its representation, and then finally mechanize it for execution. Actual processes, he feels, do not work this way. We typically start with a vague idea of how to proceed, hack up a mechanization which works, and then improve it until it performs effectively. When we have worked with it long enough, we may finally understand the process and may in fact end up in much the same place. Steve Reiss suggested that in this work we should distinguish between those classes of models which can be mechanized and those that cannot. The latter, he feels, have to be dynamic because they generally deal with human behavior and management issues.
Watts Humphrey suggested that Bob Balzer include a third category: developing and improving the