New Directions in Project Management by Paul C. Tinnirello

§ A management method

§ Data collection procedures

§ Ongoing program training

The management method should incorporate an integrated group/committee that performs each of the measurement activities. This group is similar to a conventional project steering committee and includes representatives from general management, IS management, system users, and IS development/maintenance. It differs from a project management group primarily because it survives beyond the implementation of the program. As a long-term, ongoing process, measurement must have a long-term, ongoing management committee.

The management group is critical to the success of a measurement program because it keeps program participants aware of the importance of their activities and provides the broad view necessary to the survival of the program. The committee determines how the program goals evolve over time and serves as a touchstone for their achievement.

It is also beneficial to establish a metrics user group to share experiences and coordinate training. Training is a key element of the metrics infrastructure that should be periodically provided. Training programs encompass data collection, analysis, and interpretation procedures so that project participants understand not only how to collect data, but also how to apply it. The infrastructure should also include tools to automate the collection and analysis phases of measurement, as well as a consolidated database to store all metrics data.
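The consolidated metrics database described above can be quite simple in practice. The following sketch, using an in-memory SQLite database, shows one minimal layout; the table, column names, and sample figures are invented for illustration, not a prescribed schema.

```python
import sqlite3

# A minimal consolidated metrics store: one row per project, metric,
# and data-collection event. Table and column names are illustrative.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE metrics (
        project     TEXT NOT NULL,
        metric      TEXT NOT NULL,   -- e.g., 'defects_found', 'effort_hours'
        value       REAL NOT NULL,
        recorded_on TEXT NOT NULL    -- ISO date of the collection event
    )
""")

# Automated collection tools would feed rows like these (sample data).
rows = [
    ("billing-rewrite", "defects_found", 14, "2024-01-31"),
    ("billing-rewrite", "effort_hours", 320, "2024-01-31"),
    ("billing-rewrite", "defects_found", 9, "2024-02-29"),
]
conn.executemany("INSERT INTO metrics VALUES (?, ?, ?, ?)", rows)

# The analysis phase can then start as a query, e.g., totals by metric.
for project, metric, total in conn.execute(
    "SELECT project, metric, SUM(value) FROM metrics GROUP BY project, metric"
):
    print(project, metric, total)
```

Even a sketch this small supports the point made above: once collection is automated into one store, analysis and interpretation become routine queries rather than ad hoc clerical work.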

Because the results of a measurement program are not immediately discernible, participants must perform many tasks before any visible, attributable outcome appears.

Training is one way to ensure that all project participants understand the goals and procedures that underlie what may appear, at times, to be a slow process. It also helps to alleviate employee concerns about the application of the measurement program results.

Many of the activities of measurement require the cooperation of diverse groups of people. Even though the concepts of metrics and measurement conjure up an image of required technical expertise, the most appropriate leader for the measurement program is an individual with excellent communication and negotiation skills.

Information about programs used in other companies is helpful during the process of defining program objectives and formulating metrics. The recommended reading list provides a source for gathering this information.

RECOMMENDED COURSE ACTION

Each day, new articles describe practices touted to yield tremendous productivity and efficiency improvements in the IS organization. Some IS managers have discovered that it can take many years to apply the so-called best practices of other organizations recorded in the literature. A measurement program affords IS managers the opportunity to develop local proof of what really works. More important, the information produced by a measurement program helps IS managers better understand and control the process of software development and maintenance.

Creating a custom-tailored measurement program thus provides IS managers with information about the unique behavioral patterns of their organization. In doing so, it helps these managers control their professional destinies as well.

NOTES

3. R.S. Kaplan and D.P. Norton, “Using the Balanced Scorecard as a Strategic Management System,” Harvard Business Review, January-February 1996.

4. L.H. Putnam and W. Myers, Measures for Excellence, Englewood Cliffs, NJ: Yourdon Press, 1992, p. 11.

5. V.R. Basili and D.M. Weiss, “A Methodology for Collecting Valid Software Engineering Data,” IEEE Transactions on Software Engineering, 10, no. 3, 728–738, 1984.

6. M.K. Daskalantonakis, “A Practical View of Software Measurement and Implementation Experiences within Motorola,” IEEE Transactions on Software Engineering, 18, no. 11, 998–1010, 1992.

7. R.S. Kaplan and D.P. Norton, “The Balanced Scorecard: Measures That Drive Performance,” Harvard Business Review, January-February, 71–79, 1992.

BIBLIOGRAPHY

1. Grady, R.B. Practical Software Metrics for Project Management and Process Improvement. Englewood Cliffs, NJ: Prentice-Hall, 1992.

2. Paulish, D.J. and Carleton, A.D. “Case Studies of Software-Process-Improvement Measurement.” IEEE Computer, September, 50–57, 1994.

3. Putnam, L.H., and Myers, W. Measures for Excellence, Englewood Cliffs, NJ: Yourdon Press, 1992.

4. Roche, J. and Jackson, M. “Software Measurement Methods: Recipes for Success?” Information and Software Technology, 36, no. 3, 173–189, 1994.

Chapter 45: Software Process Assessment: Building the Foundation for a Mature IS Process

Roger S. Pressman

OVERVIEW

Managers and technical staff in most companies are all too quick to select new methods and tools and proceed toward modern software engineering practice. The problem is that many of these same managers and technical people have a weak understanding of the development and maintenance process that is currently being applied within their organizations. They proceed without a firm foundation or an understanding of where they are. As a result, new technologies sometimes fail to provide the benefits that are expected.

Companies struggle with software engineering because managers fail to understand that a software engineering approach is one part of a broader total quality management philosophy. Even when this fact is understood, some managers never connect the concept of kaizen, or continuous process improvement, to software development activities.

W. Edwards Deming defined quality as striving for excellence in reliability and functions by continuous (process) improvement, supported by statistical analysis of the causes of failure. If an organization wants to improve the quality of its software, thereby enabling information technology to better serve the business, it must focus its attention on improving the process through which software is developed. The starting point is assessment — a look-in-the-mirror approach that enables managers and technical staff to better understand their software development strengths and weaknesses. Process assessment is a first step toward the creation of a viable strategy that will serve as a road map for continuous software process improvement.

A COMMON SENSE PROCESS IMPROVEMENT STRATEGY

Process assessment is the initial step in a technology transition cycle that spans many process improvement activities. The cycle begins with assessment and encompasses several other activities, as illustrated in Exhibit 1.

Exhibit 1. A Technology Transition Cycle for Process Improvement

§ Education — Most software managers and developers know relatively little about software engineering. To increase the level of software engineering knowledge, an organization must develop an effective education strategy that is tied to the results of the process assessment and that coordinates training content and timing with immediate project needs so that maximum benefit can be attained.

§ Selection — Selection defines specific goals and criteria for choosing software engineering procedures, methods, and computer-aided software engineering tools; it leads to the development of a rational mechanism for costing, justifying, and acquiring these important elements of software engineering technology.

§ Justification — Expenditures for software engineering procedures, methods, education, CASE tools, and associated support activities must be shown to provide a return on investment before money is committed. A justification model is used to demonstrate the bottom-line benefits of process improvement.

§ Installation — To install software engineering technologies successfully, a transition plan must be devised and executed. The plan defines tasks, responsibilities, milestones, and deliverables and specifies a schedule for getting the work done.

§ Evaluation — Some managers make changes to improve the development process, select and install new technology, and then stick their heads in the sand, dedicating little time to evaluating whether the technology is working. The evaluation step initiates an ongoing assessment of the CASE/software engineering installation process.

All these steps define a transition strategy, and all of them depend on a successful process assessment. The remainder of this chapter considers the first step, process assessment, in greater detail.
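The justification step above lends itself to a worked example. The sketch below computes a simple return on investment under wholly invented cost and savings figures; a real justification model would draw both numbers from locally measured data.

```python
# A toy justification model. The function and all figures are illustrative;
# the investment and savings values are assumptions, not benchmarks.
def simple_roi(investment, annual_savings, years):
    """Net return over the horizon, as a fraction of the initial outlay."""
    total_savings = annual_savings * years
    return (total_savings - investment) / investment

# Hypothetical numbers: $120,000 for tools and training, $60,000 per year
# in avoided rework, evaluated over a three-year horizon.
roi = simple_roi(investment=120_000, annual_savings=60_000, years=3)
print(f"ROI over 3 years: {roi:.0%}")  # a 50% net return in this scenario
```

Even a calculation this crude makes the chapter's point concrete: the case for process improvement can be stated in bottom-line terms before money is committed.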

OBJECTIVES OF A PROCESS ASSESSMENT

Although informal software process audits have been conducted for many years, the use of a formal process assessment is relatively new; it was not until process assessment was endorsed by the Software Engineering Institute (SEI) that major corporations and government agencies began to adopt the practice.

The term process assessment refers to both qualitative and quantitative information gathering. When process assessment is properly conducted, it satisfies its objectives by:

§ Providing a framework for an objective examination of the software development practices of an organization

§ Indicating technical and management strengths and weaknesses in a way that allows for comparison to industry norms

§ Indicating the relative software development maturity of an organization

§ Leading to a strategy for process improvement and, indirectly, to the improvement of software quality

Process Attributes

To accomplish these objectives, the process assessment approach should be designed in a way that probes each of the following process attributes:

§ Organizational policies that guide the use of software engineering practices

§ Training that supports the use of procedures, methods, and tools

§ The framework (procedural model) that has been established to define a software engineering process

§ Quality assurance (QA) activities for software

§ Project management tasks that plan, control, and monitor software work

§ Software engineering methods that allow technical staff to build high-quality applications

§ CASE tools that support the methods

§ Software metrics and measurement that provide insight into the process and its product

STRUCTURE OF A PROCESS ASSESSMENT

Although there are many different process assessment approaches, all have the same basic structure. First, a set of questions that probes process maturity is asked and answered. The questions may focus solely on procedural issues or may delve into the application of software engineering technology. Responses to the assessment questions are evaluated, and a process maturity level is computed. The maturity level represents the commitment and adherence of an organization to sound software engineering and QA practices. Finally, the results of the assessment are interpreted and used to develop a process improvement strategy. Interpretation may be global or may target specific process attributes.
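The "maturity level is computed" step can be sketched mechanically. The example below scores a yes/no questionnaire grouped by level, holding an organization at a level only if it passes every level below it. The 80% pass threshold and the sample answers are invented for the sketch and do not reproduce the SEI's actual scoring rules.

```python
# Illustrative maturity scoring: responses is {level: [bool, ...]},
# with questioned levels starting at 2. Threshold is an assumption.
def maturity_level(responses, threshold=0.8):
    """Highest level for which this and every lower level passes."""
    level = 1  # level 1 ("initial") requires nothing
    for lvl in sorted(responses):
        yes_fraction = sum(responses[lvl]) / len(responses[lvl])
        if yes_fraction >= threshold:
            level = lvl
        else:
            break  # cannot hold a higher level over a failed lower one
    return level

answers = {
    2: [True, True, True, True, False],  # 80% of level-2 practices in place
    3: [True, False, False, True],       # only 50% of level-3 practices
}
print(maturity_level(answers))  # this organization scores at level 2
```

The useful property of any such scheme is the one the chapter emphasizes: the computed level is less important than the specific "no" answers, which point directly at the strengths and weaknesses the improvement strategy must address.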

