New Directions in Project Management by Paul C. Tinnirello

Assessment Questions

Assessment questions are designed to enable an assessor (who may be an outside consultant or staff members drawn from the organization undergoing assessment) to gather enough information to understand the software organization, the application of technology within it, and the relative sophistication of the project management framework for applying the technology. An effective software engineering process assessment approach uses three types of questions: qualitative, Boolean, and quantitative.

Qualitative Questions. Questions in this category require a narrative explanation.

Some qualitative questions are:

§ How are project teams formed? Is a functional or matrix organization used?

§ Who are the customers for software within the organization?

§ What is the relationship between the customer and the people who develop software? Who initially specifies products with software content? To what degree are software development practices understood by the customer?

§ What communication problems occur between customers and the software engineering organization?

§ What is the role of quality assurance, manufacturing, and service organizations with regard to software?

§ What individual software development tools (available as operating system features and as stand-alone functions) are used during software development?

Boolean Questions. Questions in this category elicit a yes or no response.

Boolean questions are used to assess the following three areas:

§ Design — Does the software development organization use a specific method for data design? For architectural design? Is procedural design constrained to the use of the structured programming constructs? Is there a defined method for human–computer interface design?

§ Programming and coding — Is more than 90 percent of code written in a high-order language? Are specific conventions for code documentation defined and used?

§ Testing — Are specific methods used for test case design? Does test planning begin before code is written? Are the results of testing stored for historical reference? Are there mechanisms for routinely performing regression testing and for ensuring that testing covers all software requirements?

Quantitative Questions. Questions in this category enable an organization to obtain numerical information that can be used in conjunction with software metrics to compute costs and potential payback for new technology. The following information is representative:

§ Annual revenue reported by a component

§ Annual budget for data processing or IS

§ Annual budget for engineering/product-oriented software development

§ Annual budget for software-related training

§ Annual budget for computer hardware

§ Annual budget for software tools (differentiate between hardware and software)

§ Number of systems and software practitioners in all application areas

§ Number of IS people by job category

§ Number of software people working on engineered products and systems

§ Current number of outside contractors working on software in-house

§ Percentage of software people working on maintenance

§ Projected growth or decrease for each aforementioned item

Most assessment questionnaires are organized in a way that probes specific process attributes (e.g., software quality assurance or project management approach); most suggest a grading scheme for responses so that relative strengths and weaknesses can be ascertained; and most address both management and technical topics. The structure of the questionnaire, the types of questions asked, the grading scheme that is proposed, and the usefulness of the results are determined by the overall process assessment model (examples of which are discussed in the section headed "Process Assessment Models").

Response Evaluation

The responses to the assessment questionnaire are evaluated to determine the process maturity level. Although specific evaluation approaches vary, the following steps are common:

§ Responses to Boolean questions are used to derive a maturity value. Maturity values may be based on a simple count of yes/no responses, on a specific set of questions that must be answered positively to achieve a given maturity level, or on a weighting scheme that defines the maturity level of an organization that answers a specific question positively.

§ Responses to quantitative questions are compared with industry averages, when available. Both quality and productivity data are collected, and comparative averages are published in the technical literature.

§ Responses to qualitative questions are used to derive additional insight into the current process. By documenting local conditions and constraints, qualitative responses establish a baseline for interpretation.
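The weighting scheme mentioned above can be sketched as follows. This is a hypothetical illustration, not a prescribed algorithm: the question identifiers, weights, and responses are invented, and a real assessment model would define its own scoring rules.

```python
# Hypothetical sketch: deriving a maturity value from Boolean responses
# using a simple weighting scheme. Question IDs and weights are invented
# for illustration only.

def maturity_value(responses, weights):
    """Sum the weights of all questions answered 'yes' (True)."""
    return sum(weights[q] for q, answer in responses.items() if answer)

weights = {"Q1": 1.0, "Q2": 2.0, "Q3": 1.5}        # assumed per-question weights
responses = {"Q1": True, "Q2": False, "Q3": True}  # yes/no answers

print(maturity_value(responses, weights))  # 2.5
```

A simple count of yes responses, the first approach mentioned above, is the special case in which every weight is 1.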

Interpreting the Results

The maturity values computed from the responses to Boolean assessment questions can provide a means for developing a transition plan for process improvement.

Ideally, a maturity value is assigned to each of several process attributes. On the basis of each maturity value, an organization can rank process attributes according to their importance and impact on local efforts to improve the process. After priorities have been assigned for process attribute areas, interpretation begins with the goal of developing an organizationally specific set of findings and recommendations.

Findings describe specific areas of strength or weakness; recommendations define the actions required to improve the software development process.
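The ranking step described above can be sketched as follows. The attribute names and maturity values are hypothetical, and the ordering rule (weakest attribute first) is one plausible prioritization, not the only one an organization might adopt.

```python
# Hypothetical sketch: ranking process attributes by maturity value so
# that the weakest areas surface first in the transition plan. Attribute
# names and values are invented for illustration.

attribute_maturity = {
    "project management": 2.1,
    "quality assurance": 1.4,
    "configuration management": 3.0,
}

# Lowest maturity first: these attributes receive the highest priority
# for improvement actions.
priorities = sorted(attribute_maturity, key=attribute_maturity.get)
print(priorities)
# ['quality assurance', 'project management', 'configuration management']
```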

PROCESS ASSESSMENT MODELS

A process assessment model defines the overall structure and logistics of the process assessment, the organization and application of assessment questions, the process attributes that are considered during the assessment, and the manner in which process maturity is determined. Assessment models can be broadly categorized as follows:

§ Models developed by large companies and originally intended for internal use, such as the Hewlett-Packard Software Quality and Productivity Analysis (SQPA) and the Bell Canada Software Development Capability Assessment Method

§ Models developed as an adjunct to consulting services, such as Howard Rubin Associates, R.S. Pressman & Associates, Inc., Software Productivity Research, Inc., and many others

§ Models developed by government/industry consortiums such as the Software Engineering Institute Capability Maturity Model, which is the best known of these

§ Models packaged as do-it-yourself products for use by any software development organization

In addition, the International Organization for Standardization (ISO) is currently at work on a standard for software engineering process assessment to ensure compliance with ISO 9000 quality standards. At present, no assessment model meets all of the proposed requirements for the ISO assessment standard.

A detailed discussion of all types of assessment models is beyond the scope of this chapter. However, to provide a further understanding of the assessment approach, two representative assessment models are considered in the following sections.

The SEI Assessment Model

The Software Engineering Institute comprehensive assessment model is predicated on a set of software engineering capabilities that should be present as organizations reach different levels of process maturity. To determine the current state of process maturity of an organization, the SEI uses an assessment questionnaire and a five-point grading scheme. The grading scheme provides a measure of the global effectiveness of the software engineering practices of a company and establishes five process maturity levels:

§ Level 1: Initial — The software process is characterized as ad hoc. Few processes are defined, and success depends on individual effort.

§ Level 2: Repeatable — Basic project management processes are established to track cost, schedule, and functionality. The necessary process discipline is in place to repeat earlier successes on projects with similar applications.

§ Level 3: Defined — The software process for both management and engineering activities is documented, standardized, and integrated into an organizationwide software process. This level includes all characteristics defined for level 2.

§ Level 4: Managed — Detailed measures of the software process and product quality are collected so that both the software process and products are quantitatively controlled. This level includes all characteristics defined for level 3.

§ Level 5: Optimizing — Continuous process improvement is enabled by quantitative feedback from the process and from testing innovative ideas and technologies. This level includes all characteristics defined for level 4.

To achieve specific levels of process maturity, selected questions from the SEI questionnaire must be answered positively. The SEI has associated key process areas (KPAs) with each of the maturity levels. KPAs describe those software engineering functions that must be present to constitute good practice at a particular level. Across the maturity model 18 KPAs are defined and are mapped into different levels of process maturity. Assessment questions are designed to probe for the existence (or lack) of key practices that reveal whether the goals of a KPA have been achieved.
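The cumulative logic described above, in which selected questions must be answered positively to reach each level, can be sketched as follows. The question sets per level are invented for illustration; the actual SEI questionnaire defines its own required key practices for each KPA.

```python
# Hypothetical sketch of the cumulative maturity-level logic: an
# organization reaches a level only if every required question for that
# level, and for all lower levels, is answered positively. The question
# sets below are invented, not drawn from the SEI questionnaire.

required = {
    2: {"tracks_cost", "tracks_schedule"},
    3: {"documented_process"},
    4: {"quantitative_control"},
    5: {"continuous_improvement"},
}

def maturity_level(yes_answers):
    """Return the highest level whose requirements (and all lower
    levels' requirements) are satisfied; level 1 needs nothing."""
    level = 1
    for lvl in sorted(required):
        if required[lvl] <= yes_answers:  # all required questions answered yes
            level = lvl
        else:
            break  # a gap at this level blocks all higher levels
    return level

print(maturity_level({"tracks_cost", "tracks_schedule", "documented_process"}))  # 3
```

Note how the `break` models the drawback discussed below: a single negative answer at a low level caps the overall grade, regardless of sophistication elsewhere.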

The SEI approach represents a significant achievement in process assessment, but it has some drawbacks. Although detailed analysis of the assessment questionnaire can lead to an assessment of the efficacy of key process areas and related key practices, the maturity level alone tells little about individual KPAs. The process maturity level is computed in a way that causes a low grade if specific questions are answered negatively, even if other questions that represent reasonable sophistication are answered with a yes. The SEI questionnaire is sometimes criticized for underemphasizing the importance of technology and overemphasizing the importance of policies and standards. Consultants who are accredited assessors are usually needed to provide the additional detail and insight that is missing with the SEI questionnaire alone.

The assessment model proposed by the SEI represents the most comprehensive look in the mirror for the industry. It requires broad-based organizational commitment, an assessment budget in the thousands of dollars, and the presence of accredited assessors to do the work.

