WHAT MAKES AN EFFECTIVE METRIC?
Before selecting metrics that answer the questions deemed important to IS goals, it is wise to consider the criteria commonly used to judge a proposed metric's usefulness. A metric should be:
§ Understandable — If a metric is difficult to define or interpret, chances are that it will not be used or it will be applied inconsistently.
§ Quantifiable — Because metrics must be objective, IS managers should strive to reduce the amount of personal influence or judgment that must be attached to a given metric.
§ Cost-Effective — The value of the information obtained from a measurement procedure must exceed the cost of collecting data, analyzing patterns, interpreting results, and validating correctness. A given metric should be relatively easy to capture and compute, and measurement should not interfere with the actual process of creating and delivering information systems.
§ Proven — Many proposed metrics appear to have great worth but have not been validated or shown to have value in the drive to improve IS. IS managers should steer clear of metrics that appear overly complex or have not been tested and shown to be consistent or meaningful.
§ High-Impact — Although some metrics, such as cyclomatic complexity (illustrated in the sketch following this list), offer an effective way of predicting testing time and possibly corrective maintenance time, they may not provide enough information to make their collection and calculation worthwhile in all situations. If the products being measured have relatively similar levels of complexity, it is more helpful to gather metrics with a more significant impact. For example, it is well documented that one programmer can make a program very complex, whereas another can produce elegant, concise code. The effects of different coding styles on actual testing and correction time, however, pale in comparison to the effects of incomplete or inaccurate design specifications. In such a case, the metric with the most impact relates to the accuracy of design specifications rather than to program complexity.
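As a concrete illustration of a well-known complexity metric, the following minimal sketch approximates McCabe's cyclomatic complexity for Python code by counting decision points in the abstract syntax tree and adding one. The counting rules are deliberately simplified assumptions for illustration, not a full McCabe implementation.

```python
# A simplified approximation of cyclomatic complexity: count decision
# points in the syntax tree and add 1. The set of node types counted
# here is an illustrative simplification.
import ast

DECISION_NODES = (ast.If, ast.For, ast.While, ast.ExceptHandler, ast.IfExp)

def cyclomatic_complexity(source: str) -> int:
    """Return (number of decision points) + 1 for the given source."""
    tree = ast.parse(source)
    decisions = sum(isinstance(node, DECISION_NODES) for node in ast.walk(tree))
    return decisions + 1

SAMPLE = """
def classify(n):
    if n < 0:
        return "negative"
    elif n == 0:
        return "zero"
    return "positive"
"""

print(cyclomatic_complexity(SAMPLE))  # 3: two decision points (if/elif) + 1
```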
IMPLEMENTING A MEASUREMENT PROGRAM
Although a measurement program appears to be a rational management approach backed by documented successes, some organizations find implementation difficult. Implementing a measurement program is not a trivial task; it is a significant commitment that requires management support. The two key challenges in implementing a measurement program are time and communication.
Key Challenges
Time A measurement program is not a quick fix for a broken process with benefits that are quickly realized. Data must be gathered and analyzed over time before the program yields information that people can translate into actions that improve the development and maintenance process. It takes time to create a metric baseline, evaluate the results, and choose appropriate new actions. Then it takes additional time to compare new information about those new actions against the baseline to gauge improvements. Implementation of a measurement program is best viewed as a critical component of long-term continuous improvement.
Communication Part of making a measurement program work is convincing people that it will lead to organizational improvements. If program participants are not convinced of the program's importance, chances are the effort will be abandoned before meaningful data is collected and used. And if people believe that the program's results will be used to assign blame unfairly for projects and products, they will not participate.
A key challenge of program implementation is thus communicating prospective benefits to the diverse audiences that will collect, analyze, interpret, and apply the information. At the same time, the proposed use of the measurement information must be made clear to all participants.
Program Activities
Although the success of a measurement program cannot be guaranteed, IS managers can improve the odds of successful implementation by paying attention to the individual activities that compose the program. Exhibit 3 shows the activities necessary to implement and maintain an IS measurement program. Each activity is described in the sections that follow.
Exhibit 3. Activities of an IS Measurement Program
Assessment The three primary functions of assessment are:
1. Evaluating the current position of the organization
2. Identifying the goals of a measurement program
3. Establishing specific measurement goals
Since the mid-1980s, formal software process assessments such as those from the SEI and Software Productivity Research (SPR) have been available to evaluate the software development processes of an organization. Assessment provides a clear picture of the current organizational environment and serves as a starting point from which to gauge future improvements. For example, it would be unreasonable to state that a new development methodology provided increased programmer productivity unless the level of productivity before its implementation was known and documented.
During the assessment phase, it is also important to define the goals of the measurement procedure. Another activity performed during assessment is selling the measurement program to management and IS staff. All participants in the program must understand the relationship between measurement and improvement so that they will support the resulting program.
Formulation A measurement program requires the formulation of specific, quantifiable questions and metrics to satisfy the program goals. The previously discussed suggestions for choosing appropriate metrics and sample goals/questions/metrics provide a good starting point.
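As a hedged illustration, a formulated goal/question/metric hierarchy can be recorded as plain data; the goal, questions, and metric names below are hypothetical examples, not prescribed content.

```python
# A minimal, hypothetical goal/question/metric (G/Q/M) record kept as
# plain data; every entry shown is illustrative.
gqm = {
    "goal": "Improve user satisfaction with IS support",
    "questions": [
        {
            "question": "What is the current level of user satisfaction "
                        "with IS support?",
            "metrics": ["mean questionnaire score", "support requests per user"],
        },
        {
            "question": "How quickly are support requests resolved?",
            "metrics": ["median hours to resolution"],
        },
    ],
}

for q in gqm["questions"]:
    print(q["question"], "->", ", ".join(q["metrics"]))
```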
Collection The collection of specific metrics requires a cost-accounting system aimed at gathering and storing specified attributes that act as input data for the metrics. This process should be automated so that collection takes as little time as possible and the organization avoids becoming mired in amassing huge amounts of data. Careful planning during assessment and formulation helps avoid gathering too much data.
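The sketch below shows one way automated collection might look, appending raw measurement attributes to a simple CSV store. The file name and field names (project, phase, hours, defects) are hypothetical assumptions, not a prescribed schema.

```python
# A hedged sketch of automated metric collection: each call appends one
# raw measurement record to a CSV store. Schema and file name are
# illustrative assumptions.
import csv
from datetime import date
from pathlib import Path

STORE = Path("metrics_store.csv")  # hypothetical store location
FIELDS = ["recorded_on", "project", "phase", "hours", "defects"]

def record(project: str, phase: str, hours: float, defects: int) -> None:
    """Append one measurement record, writing a header on first use."""
    is_new = not STORE.exists()
    with STORE.open("a", newline="") as fh:
        writer = csv.DictWriter(fh, fieldnames=FIELDS)
        if is_new:
            writer.writeheader()
        writer.writerow({
            "recorded_on": date.today().isoformat(),
            "project": project,
            "phase": phase,
            "hours": hours,
            "defects": defects,
        })

record("payroll-rewrite", "testing", 37.5, 4)  # hypothetical data point
```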
Analysis The physical collection of a metric does not, by itself, provide much information that helps in the decision-making process. Just as gross sales do not reveal the financial condition of an organization, the number of function points for a project does not explain how many person-months it will take to produce that project. A metric must be statistically analyzed so that patterns are uncovered, historical baselines are established, and anomalies are identified.
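To make the analysis step concrete, the minimal sketch below establishes a historical baseline (mean and standard deviation) for a series of observations and flags values more than two standard deviations from it; the defect-density figures are fabricated for illustration.

```python
# A minimal sketch of the analysis step: compute a historical baseline
# and flag observations more than two standard deviations away.
# The defect-density series is hypothetical.
from statistics import mean, stdev

defects_per_kloc = [3.1, 2.8, 3.4, 2.9, 3.0, 7.2, 3.2]

baseline = mean(defects_per_kloc)
spread = stdev(defects_per_kloc)
anomalies = [x for x in defects_per_kloc if abs(x - baseline) > 2 * spread]

print(f"baseline={baseline:.2f}, spread={spread:.2f}, anomalies={anomalies}")
# The outlying release (7.2) is flagged for interpretation; the metric
# itself does not explain why the anomaly occurred.
```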
Interpretation The function of interpretation is to attach meaning to the analysis, in other words, to determine the cause of the patterns that have been identified during analysis and then to prescribe appropriate corrective action. For example, if analysis shows that users are consistently dissatisfied with systems that require an ever-increasing number of user and analyst meetings, then it may not be a good idea to schedule more meetings. A more effective approach is to look for other reasons behind the dissatisfaction. Perhaps the meetings are unproductive, communication skills are ineffective, or business problems are being incorrectly identified. The interpretation of metric analyses furnishes a direction in which to start looking for different problems and solutions.
Validation As shown in Exhibit 3, validation occurs throughout each phase of the measurement program. It involves asking a set of questions to ensure that the goals of the measurement program are being addressed. For example, the results of the formulation phase should be validated with two key questions:
1. Are we measuring the right attribute?
2. Are we measuring that attribute correctly?
The following scenario illustrates how validation questions are applied. Assume that one of the overall goals of an IS organization is to improve the performance of user support. Using the G/Q/M approach, the IS organization establishes a goal of improving user satisfaction with IS support. A question that supports this goal is, “What is the current level of user satisfaction with IS support?” IS personnel then formulate a questionnaire they believe measures the level of user satisfaction with IS support. The questionnaire is used to collect data, which is analyzed and interpreted.
Analysis shows that there is no relationship between the type, amount, or level of IS support and user satisfaction. Why not? It could be because there is no relationship between user satisfaction and IS support, or because the questionnaire was not measuring user satisfaction with IS support. Validating the questionnaire in the formulation phase helps ensure that it is measuring what it is intended to measure.
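A hedged sketch of the validation check in this scenario follows: computing a simple correlation between support levels and questionnaire scores. Both data series are hypothetical, and `statistics.correlation` requires Python 3.10 or later.

```python
# Hypothetical validation check: does the questionnaire score move with
# the level of IS support at all? Both series are fabricated examples.
from statistics import correlation  # Python 3.10+

support_hours_per_user = [5, 12, 8, 20, 3, 15, 9]
satisfaction_score = [3, 4, 2, 3, 3, 2, 4]  # 1 (low) .. 5 (high)

r = correlation(support_hours_per_user, satisfaction_score)
print(f"Pearson r = {r:.2f}")
# An r near zero raises the two validation questions: is there truly no
# relationship, or is the questionnaire not measuring satisfaction
# with IS support?
```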
Measurement as a Passive Yet Iterative Process
Measurement itself is a relatively passive process; improvement comes from the actions people take because they are being measured and from the new procedures developed on the basis of the information the measurement program generates.
The goal of a measurement program is to provide information that can be used for continual improvement of the systems development process and its related products.
Although Exhibit 3 depicts the activities of assessment, formulation, collection, analysis, and interpretation as a circular sequence, in practice they are interdependent and are not performed strictly in order. In the scenario presented in the preceding section, the IS organization found during analysis that the identified metrics were inadequate to determine any patterns. Such a result required validation of the metrics being used and a return to the formulation phase to define other metrics that would yield information more relevant to the goals.
MANAGING A MEASUREMENT PROGRAM
A measurement program is a long-term effort requiring the cooperation and coordination of a broad set of participants. One way to support the program is to establish a metrics infrastructure. A metrics infrastructure includes the following: