In this project, the basic data consists of an expected size of 32,000 source lines of code, a development time of 13 months, and a total planned effort of 87 person-months.
Exhibit 3 shows the main build (MB) plan (black square) compared against telecom industry trend lines derived from a database of similar developments. The core planning data shown is also used to calculate the process productivity of the development team assumed by the supplier. In this case, the process productivity works out at 12.5, consistent with the industry average for telecom developments of around 12.
Exhibit 3. Comparing the Planned Size, Time, and Effort Against Industry Reference Measures
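As a rough illustration of the arithmetic behind this health check, the sketch below backs the raw productivity parameter out of a simplified form of Putnam's software equation (size = productivity parameter × effort^1/3 × schedule^4/3). The skills factor and the published table that converts the raw parameter into the integer productivity index are described in [6] and are not reproduced here, so the output should be read only as an order-of-magnitude consistency check, not as the supplier's actual calculation.

```python
# Illustrative only: a simplified form of Putnam's software equation,
#   size = PP * effort_py**(1/3) * schedule_y**(4/3),
# rearranged to recover the process productivity parameter implied by a plan.
# The skills factor and the table mapping this raw parameter onto the integer
# productivity index (~12.5 in the text) are given in [6] and are deliberately
# omitted, so treat the result as a rough consistency check.

def implied_process_productivity(size_sloc: float,
                                 effort_pm: float,
                                 schedule_months: float) -> float:
    """Back out the raw productivity parameter implied by a plan."""
    effort_py = effort_pm / 12.0          # person-months -> person-years
    schedule_y = schedule_months / 12.0   # months -> years
    return size_sloc / (effort_py ** (1 / 3) * schedule_y ** (4 / 3))


if __name__ == "__main__":
    # Planning data from the text: 32,000 lines, 13 months, 87 person-months.
    pp = implied_process_productivity(32_000, 87, 13)
    print(f"Implied raw productivity parameter: {pp:,.0f}")
```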
Thus, the “health check” on the plan shows that it is in line with expected industry values. The plan now forms the baseline to track and report development progress.
Contractual Progress Data
Mandatory contract progress data is returned every two weeks and is used to track progress and to identify any risk of slippage. The progress data is used to perform variance analysis (a form of statistical control) against the baseline plan.
The progress data consists of:
§ Staffing: how many people are allocated to the project
§ Key milestones passed: for example, program design complete, all code complete
§ Program module status: whether it is in design, code, unit test, integration, or validation
§ Program module size when the code is complete
§ Total code currently under configuration control
§ Software defects broken down into critical, severe, moderate, and cosmetic
§ Number of planned and completed integration and validation tests
This progress data is essential for the management of software development. Without this basic data, the development is out of control.
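As a minimal sketch, the biweekly return could be represented with a record such as the one below; the field names are hypothetical and simply mirror the contract items listed above rather than any particular tool's schema.

```python
# Hypothetical record layout for the biweekly progress return; the field
# names mirror the contract items listed above and come from no real tool.
from dataclasses import dataclass, field
from enum import Enum
from typing import Optional

class Phase(Enum):
    DESIGN = "design"
    CODE = "code"
    UNIT_TEST = "unit test"
    INTEGRATION = "integration"
    VALIDATION = "validation"

@dataclass
class ModuleStatus:
    name: str
    phase: Phase
    size_sloc: Optional[int] = None     # reported once the module's code is complete

@dataclass
class ProgressReturn:
    period_end: str                                        # reporting date
    staff_on_project: float                                # people allocated
    milestones_passed: list = field(default_factory=list)  # e.g. "all code complete"
    modules: list = field(default_factory=list)            # ModuleStatus entries
    sloc_under_config_control: int = 0
    defects_by_severity: dict = field(default_factory=dict)  # critical/severe/moderate/cosmetic
    tests_planned: int = 0
    tests_completed: int = 0
```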
Tracking, Reporting, and Forecasting Completion
At each reporting period, the progress data is used to determine the status of the project against the baseline plan. Advanced statistical control techniques use the data to determine if there is significant variance against the plan. If significant variance is found, then weighting algorithms enable the new completion date to be forecast, as well as forecasting the outstanding data to complete. This can include code production, defects, and tests. [6]
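The weighting algorithms of [6] are not reproduced here, but the general idea can be illustrated with a deliberately naive sketch: compare the observed code-production rate with the planned rate and, if the variance exceeds an agreed tolerance, extrapolate the observed rate to a revised completion date. The tolerance and the sample figures are assumptions for illustration only.

```python
# Deliberately naive illustration; the method in [6] uses statistical control
# limits and weighting algorithms.  Here we simply compare code-production
# rates and extrapolate the observed rate to a revised completion date.

def forecast_completion(planned_size: float, actual_size: float,
                        elapsed_months: float, planned_months: float,
                        tolerance: float = 0.10):
    """Return (significant_variance, forecast_months_to_complete)."""
    planned_rate = planned_size / planned_months   # lines per month, planned
    actual_rate = actual_size / elapsed_months     # lines per month, observed
    variance = (planned_rate - actual_rate) / planned_rate
    forecast_months = planned_size / actual_rate   # schedule at the observed rate
    return abs(variance) > tolerance, forecast_months


if __name__ == "__main__":
    # Hypothetical mid-project figures, not the Exhibit 4 data.
    significant, forecast = forecast_completion(
        planned_size=32_000, actual_size=19_000,
        elapsed_months=9.0, planned_months=13.0)
    print(f"Significant variance: {significant}; "
          f"forecast completion at ~{forecast:.1f} months")
```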
Exhibit 4 shows the situation in the development project after nine months.
Exhibit 4. Variance Analysis and Forecast to Complete
                                                             Plan     Actual/Forecast   Difference
Elapsed months                                              13.93        13.93              0.00
Aggregate staff                                              2.00         2.78              0.78
Total cumulative effort (person-months)                     96.08       101.47              5.39
Total cumulative cost (thousands of Netherlands Guilders) 2802.00      2959.00            157.00
Size (thousands of effective source lines of code)          35.39        35.38             -0.01
Total defect rate                                            3.00         5.00              2.00
Total cumulative normal defects                            209.00       187.00            -22.00
Total mean time to defect (days)                             7.22         4.77             -2.45
Productivity index                                          12.00        11.60             -0.40
MBI                                                          2.20         2.10             -0.10
Defect behavior is of particular interest because it follows the expected theoretical curve, with occasional excursions. These rate variations are smoothed out in the accompanying cumulative curve. The corresponding mean time to defect (MTTD) indicates high reliability at delivery.
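For readers unfamiliar with the expected theoretical curve, the Putnam approach [6] models defect discovery as a Rayleigh-type profile. The sketch below computes such a profile and the corresponding mean time to defect; the total-defect and peak-month parameters are invented for illustration and are not the project's figures.

```python
# Illustrative Rayleigh-type defect discovery profile, the family of curves
# used in the Putnam approach [6]; all parameter values here are invented.
import math

def rayleigh_defect_rate(month: float, total_defects: float, peak_month: float) -> float:
    """Expected defects found per month, peaking at peak_month."""
    return (total_defects * month / peak_month ** 2) * math.exp(
        -month ** 2 / (2 * peak_month ** 2))

def mean_time_to_defect_days(defects_per_month: float) -> float:
    """Approximate MTTD in days, assuming ~30 days per month."""
    return 30.0 / defects_per_month if defects_per_month else float("inf")

if __name__ == "__main__":
    for month in range(1, 14):
        rate = rayleigh_defect_rate(month, total_defects=250, peak_month=6.0)
        print(f"month {month:2d}: ~{rate:5.1f} defects/month, "
              f"MTTD ~{mean_time_to_defect_days(rate):4.1f} days")
```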
KEEPING THE AUDIT TRAIL
The system used to track the telco development allows all plans and forecasts to be logged. At the end of the project, there is a complete history in terms of plans, progress at a given date, and the corresponding forecast.
This capability is shown in Exhibit 5 where the gray lines are plans, while the black parts of lines are actual progress data with the outstanding forecast shown as white.
Each entry represents a plan or forecast logged at a specific date.
Exhibit 5. Logged Plans, Actual Data, and Forecasts
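A minimal sketch of such an audit trail is shown below; the record layout and the two example entries are hypothetical, loosely based on the planning and forecast figures quoted in the text rather than taken from the actual logging system.

```python
# Hypothetical audit-trail layout: every plan or forecast is logged with the
# date it was produced, so the planning history can be replayed afterwards.
from dataclasses import dataclass

@dataclass(frozen=True)
class LoggedEstimate:
    logged_on: str            # date the plan or forecast was produced (invented here)
    kind: str                 # "plan" or "forecast"
    schedule_months: float    # estimated elapsed months at completion
    effort_pm: float          # estimated total effort (person-months)
    size_sloc: int            # size assumption at that date

history = [
    LoggedEstimate("1997-09-01", "plan",     13.0, 87.0,  32_000),  # contract baseline
    LoggedEstimate("1998-06-01", "forecast", 13.9, 101.5, 35_400),  # later reforecast
]
```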
Controlling Requirement Changes
At month 10 in the development, a change request was raised. Using the size baseline (this is confirmed by the actual code produced), it is practical to evaluate
the impact of such a request. To do this, the total size is increased by the size estimated for the change request. The new size is used to produce a forecast of the new delivery date, as well as the additional staffing needed. The results showed that an unacceptable delay would result; thus, it was decided to postpone the change to the next release.
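One simplified way to see why even a modest change request can move the delivery date is to hold process productivity and average staffing constant in the software equation; schedule then scales with size to the 3/5 power. The sketch below uses a made-up change-request size, so the resulting slip is illustrative only; the project's actual evaluation came from the tool's forecasts.

```python
# Simplified what-if: hold process productivity and average staffing constant
# in the software equation (size ~ PP * (staff*time)**(1/3) * time**(4/3)),
# so schedule scales with size**(3/5).  The 3,000-line change request below
# is a made-up figure, not the project's actual request.

def new_schedule_months(baseline_size: float, added_size: float,
                        baseline_months: float) -> float:
    scale = (baseline_size + added_size) / baseline_size
    return baseline_months * scale ** (3 / 5)

if __name__ == "__main__":
    revised = new_schedule_months(baseline_size=32_000, added_size=3_000,
                                  baseline_months=13.0)
    print(f"Revised schedule: ~{revised:.1f} months "
          f"(slip of ~{revised - 13.0:.1f} months)")
```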
FINAL TESTS AND ACCEPTANCE
Once the code is complete, the main activity in development is executing the integration and validation tests. These tests detect the remaining software defects; this final test phase is characteristic of almost all software projects.
Post-Implementation Review: Keeping and Using the History
Using the basic data described here provides visibility and control during development. In addition, it means that a complete history is available when the project completes. This is invaluable in understanding how the project performed, and to add to a growing database of (in this case) supplier performance.
Notes are also kept throughout development. These are used to investigate the history in the development, and to learn lessons for future projects.
CONCLUSIONS: THE (ALMOST) PERFECT PROJECT
In the case being discussed, all went according to plan until the final validation tests performed in the last two weeks. At this point, there was concrete evidence that high reliability would be achieved and delivery from the supplier would be on time with all the functionality required.
The final validation tests of this complex telecommunications software development included testing the interfaces to network equipment and systems. Unfortunately, the telco had not assembled one essential set of interface equipment required to perform the final validation tests. The result was that completion slipped by six weeks. This was, however, due to the telco — not to the supplier.
In all fairness, the telco did comment that the project had been among the best in its experience. The supplier had kept to the schedule and the budget, had delivered all the contracted functionality, and had achieved high reliability.
Applying the same six criteria to assess the telco's purchasing competence gives the following results:
§ Telco corporate memory: Suppliers' plans are kept and compared with industry reference measures. Over time, detailed measures are built up for each supplier as its developments complete. These measures are also used to check new plans from the same suppliers.
§ Telco sizing and reuse: Each supplier is formally required to estimate software size, including uncertainty and reuse. This size data is used to assess the plan and to quantify the risk. The size data forms part of the contract baseline, and is used to track progress in each software module and control requirement changes.
§ Telco extrapolation using actual performance: The core progress data is used to determine progress against the contract baseline. Variance analysis determines if progress is within agreed-upon limits. If it is outside the limits, then new extrapolations are made of the outstanding time, effort, cost, defects, and actual process productivity.
§ Telco audit trails: The initial baseline plan is recorded, together with potential alternatives. All progress data, new forecasts, and the agreed-upon contractor plan and size revisions are logged.
§ Telco integrity within dictated limits: Each supplier proposal is evaluated against acquisition constraints of time, effort, cost, reliability, and risk.
Development progress is reviewed continuously to confirm that it is within the contract limits.
§ Telco data collection and performance feedback: The development history is captured using the core measures, including the initial proposal, contract baseline, progress data, forecasts, and revised plans. This history is used to continuously update the data repository of supplier performance, and highlight those that provide value for money.
Thus, one can see that the telco motivates suppliers to get and use the SEI core measures to their mutual advantage. This parallels the U.S. Department of Defense's motivation in applying maturity assessments to suppliers.
The telco is concerned with getting commercial benefits from exploiting the SEI core measures. There are real bottom-line benefits to using the core measures, as illustrated here.
It is a pleasant change to describe a real development success. Indeed, use of the SEI core measures facilitates success. All too often, software case studies are based on disasters, many of which could have been avoided by actively using the SEI core measures.
NOTES
1. Carleton, A.D., Park, R.E., and Goethert, W.B., "The SEI Core Measures," The Journal of the Quality Assurance Institute, July 1994.
2. Kempff, G.W., "Managing Software Acquisition," Managing System Development, July 1998.
3. Humphrey, W.S., "Three Dimensions of Process Improvement. Part 1: Process Maturity," CROSSTALK: The Journal of Defense Software Engineering, February 1998.
4. GAO, Report to the Secretary of Transportation: Air Traffic Control, GAO/AIMD-97-20.
5. Greene, J.W.E., "Sizing and Controlling Incremental Development," Managing System Development, November 1996.
6. Putnam, L.H., Measures for Excellence: Reliable Software, On Time, Within Budget, Prentice-Hall, 1992.
For further information on the practices described here, refer to Lawrence H. Putnam and Ware Myers, Industrial Strength Software: Effective Management Using Measurement, IEEE Computer Society Press, Los Alamitos, CA, 1997.
Chapter 13: Does Your Project Risk