New Directions in Project Management by Paul C. Tinnirello

Exhibit 3. Cost Disparities between Best-Practice Data Centers and Industry Averages

                                                Best Practice   Industry Average   Disparity
Annual Spending per Used MIPS
  Hardware                                      $51,249         $89,701            1.8
  Software                                      $17,802         $71,440            4.0
  Personnel                                     $41,748         $115,454           2.8
Cost per gigabyte of disk storage per month     $109.91         $272.84            2.5
Cost per printed page                           $0.0017         $0.0070            4.1
Total staff per used MIPS                       0.70            1.93

Source: Defense Information Systems Agency, U.S. Department of Defense, 1993.

It should be emphasized that these figures (which are based on used rather than theoretical capacity) compare best-practice organizations with industry averages. Many organizations have cost structures much higher than the averages cited in this study. Capacity utilization, along with the effects of consolidation, rationalization, and automation, suggests that efficiency is in fact the single most important variable in IS costs. Clearly, the best way to reduce IS costs for any platform is to increase the efficiency with which IS resources are used.
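To see how much utilization alone can move these metrics, the following sketch computes annual spending per used MIPS for two hypothetical installations that differ only in how much of their installed capacity is actually consumed. All figures in the sketch are illustrative assumptions, not values from the DISA study.

```python
# Illustrative sketch only: hypothetical spend, capacity, and utilization figures
# showing why costing against used (rather than installed) MIPS matters.

def spend_per_used_mips(annual_spend: float, installed_mips: float, utilization: float) -> float:
    """Annual spend divided by the MIPS actually consumed."""
    used_mips = installed_mips * utilization
    return annual_spend / used_mips

annual_spend = 9_000_000   # total annual hardware spend, dollars (assumed)
installed = 120            # installed MIPS (assumed)

well_run = spend_per_used_mips(annual_spend, installed, utilization=0.85)
poorly_run = spend_per_used_mips(annual_spend, installed, utilization=0.45)

print(f"High utilization: ${well_run:,.0f} per used MIPS")
print(f"Low utilization:  ${poorly_run:,.0f} per used MIPS")
print(f"Disparity: {poorly_run / well_run:.1f}x")  # driven entirely by utilization
```

Identical spending, measured against used capacity, produces a near twofold disparity purely through utilization.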

APPLICATION LIFE CYCLE

One of the major long-term IS cost items for any business is the staff needed to maintain applications. In this context, maintenance means ongoing changes and enhancements to applications in response to changing user requirements. Even organizations that use packaged software will need personnel to perform these tasks.

The typical software application experiences a distinct U-curve pattern of demand for changes and enhancements over time. Demand is relatively high early in the cycle as the application is shaken down. Change and enhancement frequency then decline, before increasing again at a later stage as the application becomes progressively less appropriate for user requirements.

The frequency of application change and enhancement is affected by changes in factors such as organizational structures and work patterns. The level may remain low if the business operates in a reasonably stable manner, but because all applications age and eventually become obsolete, increases are inevitable. The main variable is how long this takes, not whether it occurs.

The application life cycle has important implications for IS costs. Once the shake-down phase is completed, a new application usually requires comparatively little application maintenance overhead. However, at some point maintenance requirements usually escalate, and so will the required staffing level.
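To make the staffing implication concrete, the sketch below models change-and-enhancement demand as a U-shaped function of application age and converts it into an approximate maintenance headcount. The curve shape, request volumes, and maintainer throughput are all hypothetical assumptions chosen only to illustrate the pattern described above.

```python
# Hypothetical U-curve model of change/enhancement demand over an application's life,
# and the maintenance staffing it implies. All parameters are illustrative assumptions.

def change_requests_per_year(age_years: float) -> float:
    shakedown = 120 * (0.5 ** age_years)   # high early demand that decays as the application settles
    obsolescence = 4 * (age_years ** 1.5)  # slow but accelerating drift away from user requirements
    baseline = 20                          # steady trickle of routine changes
    return shakedown + obsolescence + baseline

REQUESTS_PER_MAINTAINER_YEAR = 60          # assumed throughput of one maintenance programmer

for age in range(0, 13, 2):
    demand = change_requests_per_year(age)
    staff = demand / REQUESTS_PER_MAINTAINER_YEAR
    print(f"year {age:2d}: ~{demand:5.0f} requests -> {staff:.1f} maintainers")
```

Sampling only the quiet middle years of such a curve understates long-run demand, which is the measurement problem discussed next.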

Measuring application maintenance requirements for limited periods gives a highly misleading impression of long-term costs. Application maintenance costs eventually become excessive if organizations do not redevelop or replace applications on an ongoing basis. Moreover, where most or all of the applications portfolio is aged (which is the case in many less-sophisticated mainframe and minicomputer installations), the IS staff will be dedicated predominantly to maintenance rather than to developing new applications.

As an applications portfolio ages, IS managers face a straightforward choice: spend to develop or reengineer applications, or accept user dissatisfaction with the existing ones. Failure to make this choice amounts to an implicit decision in favor of high maintenance costs, user dissatisfaction, and eventually a more radical and more expensive solution to the problem.

APPLICATION DEVELOPMENT VARIABLES

The cost of applications development in any business is a function of two variables:

1. Demand for new applications

2. Productivity of the application developers

Costs will decrease only if there is low demand and high productivity. In most organizations, demand for applications is elastic: as the quality of IS solutions increases, users' demands for applications also increase. This is particularly the case for interactive, user-oriented computing applications. Early in the cycle after a major system change, user demand for these applications can easily grow at an exponential rate.
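A rough sketch of how these two variables interact is shown below. The function-point demand, productivity rates, and loaded cost per person-month are illustrative assumptions, not benchmarks; the point is that an elastic rise in demand can outweigh a genuine productivity gain.

```python
# Illustrative sketch: development cost as a function of demand and developer productivity.
# All figures are assumptions chosen to show the effect of demand elasticity.

def development_cost(demand_fp: float, productivity_fp_pm: float, cost_per_person_month: float) -> float:
    """Cost = effort (person-months) x loaded cost, where effort = demand / productivity."""
    person_months = demand_fp / productivity_fp_pm
    return person_months * cost_per_person_month

COST_PER_PERSON_MONTH = 12_000  # hypothetical loaded cost per developer per month

# Before: modest demand, modest productivity.
before = development_cost(demand_fp=2_000, productivity_fp_pm=10,
                          cost_per_person_month=COST_PER_PERSON_MONTH)

# After a tooling change: productivity rises 50 percent, but faster, better delivery
# stimulates user demand, which rises 80 percent.
after = development_cost(demand_fp=3_600, productivity_fp_pm=15,
                         cost_per_person_month=COST_PER_PERSON_MONTH)

print(f"before: ${before:,.0f}")  # 200 person-months of work
print(f"after:  ${after:,.0f}")   # 240 person-months: total cost rises despite better productivity
```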

If this effect is not anticipated, unexpected backlogs are likely to occur. More than a few IS managers who committed to reducing IS costs have been forced to explain to users that it is not possible to meet their requirements. Similarly, the productivity of applications development can vary widely. Some of the major factors affecting applications development productivity are summarized in Exhibit 4.

Exhibit 4. Factors Affecting Applications Development Productivity

Proper Definition of Requirements

Application/Systems Design

Applications Characteristics

Applications Structure/Size

Underlying Applications Technologies

Applications Complexity

Functionality of Tools

Development Methodology

Match of Tools to Applications

Degree of Customization

Training/Documentation

Programmer Skills/Motivation

Project Management

Management Effectiveness

Development tools are an important part of the equation. Normally, third-generation languages (3GLs) yield the lowest levels of productivity, fourth-generation languages (4GLs) offer incremental improvements, and computer-aided software engineering tools, particularly those using rapid application development methodologies, perform best. Visual programming interfaces and the use of object-oriented architecture can also have significant effects. Productivity, in this context, is normally measured in terms of function points per programmer over time.
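As a simple illustration of that metric, the sketch below computes function points delivered per programmer-month for the same hypothetical application built with three classes of tools. The project figures are invented for illustration and do not come from the text.

```python
# Sketch of the productivity metric mentioned above: function points per programmer-month.
# The projects and their figures are hypothetical.

def fp_per_programmer_month(function_points: float, team_size: int, elapsed_months: float) -> float:
    return function_points / (team_size * elapsed_months)

# The same 1,200-function-point application, built with different tool classes (assumed figures).
projects = {
    "3GL (hand coding)": (1_200, 8, 30),   # (FP delivered, team size, elapsed months)
    "4GL":               (1_200, 6, 20),
    "CASE/RAD toolset":  (1_200, 5, 12),
}

for tool, (fp, team, months) in projects.items():
    rate = fp_per_programmer_month(fp, team, months)
    print(f"{tool:18s}: {rate:4.1f} FP per programmer-month")
```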

Productivity gains are not automatic, however. Tools designed for relatively small, query-intensive applications may not work well for large, mission-critical online transaction processing systems, and vice versa. Matching the proper tool to the application is thus an important factor in productivity. However, IS managers should be wary of vendor claims that the tools will improve productivity and reduce staff costs, unless it can be shown that this has occurred for applications comparable to their own requirements. Increases in productivity can be offset by increases in underlying software complexities (i.e., increases in the number of variables that must be handled during the programming process) and in degrees of customization.

Sophisticated user interfaces, complex distributed computing architectures, and extensive functionality at the workstation level have become common user requirements. However, multiuser applications with these characteristics are relatively difficult to implement and require realistic expectations, careful planning, and strong project management skills. Failure to take these factors into account is the reason for most of the delays and cost overruns associated with client/server initiatives.

New development methodologies and tools may alleviate, but not remove, problem areas. Regardless of the tools and methodologies used, effective requirements definition and management of the applications development process are more likely to be the critical factors in productivity.

TRANSITION COSTS

Costs involved in switching applications and databases from one system to another are commonly underestimated. Many businesses treat transition costs as a secondary issue and give them less scrutiny than capital investment or ongoing operating costs. Even organizations that handle other aspects of the costing process with diligence often tolerate a great deal of vagueness regarding the time and expense required for transition.

This imprecision also extends to many claims of cost savings. Most of the figures quoted by vendors, consultants, and the media refer to purported savings in operating costs, not to net gains after transition outlays. Moreover, operating costs may be artificially low in the first few years following a change precisely because major one-time investments have been made in new hardware and software, and because the applications are relatively new and require little maintenance.

Initial Installation of New Hardware and Software

Costs of the initial installation of new hardware and software are comparatively easy to quantify, provided that capacity has been properly estimated using workloads, service levels, and other criteria. If this has not been done, the organization may experience a sharp increase in costs above projected levels during the first year as the new system comes into production and actual requirements become apparent.

One-Time Services Outlays

Some one-time services outlays are usually required as well. These may range from large-scale conversions of applications and data, to the recabling of data centers and end-user environments, to retraining of IS and user personnel, along with the installation and assurance of system and applications software.

Many organizations use generic cost data supplied by consultants or vendors. This data can be highly inaccurate for the specific situation. Hard costs should be obtained to provide more precise data for planning purposes.

Length of the Transition Period

The length of the transition period has a major impact on costs. The actual time taken depends greatly on several factors. Organizations moving to new, relatively untried platforms and technologies need to allow for longer periods to shake down the new system and test its stability in the production environment. Depending on the size and requirements of the organization, as well as application workloads, transitions can take five years or more. A protracted transition period means that operating costs (e.g., software, hardware and software maintenance, personnel, cost of capital) are substantial even before a new system goes into normal operations.

Parallel operations costs (i.e., maintaining the existing system in production) can also increase if the transition period is extended.

All these factors make it important that IS managers set precise dates for the transition process, beginning with the start-up of a new system in test mode and ending with the shutdown of the old system. This approach (shown in Exhibit 5) allows for more accurate five-year costing.

Exhibit 5. Measurement Periods for Comparative Costing
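The measurement-period idea behind Exhibit 5 can be reduced to a simple comparison. The sketch below uses entirely hypothetical figures (a one-time transition outlay, a two-year parallel-running period, and assumed annual operating costs) to total five-year costs for staying on the current system versus switching; under these assumptions the purported operating savings never materialize as a net gain over the five-year window.

```python
# Minimal five-year comparative costing sketch. Every figure below is a hypothetical
# assumption; the point is only that one-time outlays and parallel operations must be
# counted before any savings claim is credible.

YEARS = 5
OLD_SYSTEM_ANNUAL = 5_000_000    # current annual operating cost (assumed)
NEW_SYSTEM_ANNUAL = 3_500_000    # projected annual operating cost after cutover (assumed)
ONE_TIME_TRANSITION = 4_000_000  # conversion, recabling, retraining, installation (assumed)
PARALLEL_YEARS = 2               # years the old system stays in production during transition (assumed)

cost_to_stay = OLD_SYSTEM_ANNUAL * YEARS

cost_to_switch = ONE_TIME_TRANSITION
for year in range(1, YEARS + 1):
    if year <= PARALLEL_YEARS:
        # parallel operations: old system still in production while the new one is shaken down
        cost_to_switch += OLD_SYSTEM_ANNUAL + NEW_SYSTEM_ANNUAL
    else:
        cost_to_switch += NEW_SYSTEM_ANNUAL

print(f"Five-year cost, stay:   ${cost_to_stay:,.0f}")
print(f"Five-year cost, switch: ${cost_to_switch:,.0f}")
print(f"Net difference (positive favors switching): ${cost_to_stay - cost_to_switch:,.0f}")
```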

COMPANY-TO-COMPANY COMPARISONS

Reliability of Case Studies

For any business considering a major computing systems change, the experiences of others who have made similar changes should, in principle, provide useful input. However, few well-documented case studies exist, and those that do are not always representative. The lack of reliable information is most obvious for mainframe migration patterns.

Organizations that have replaced and removed mainframes entirely fit a distinct profile. In the majority of cases, these organizations possessed older equipment and aging applications portfolios. Their IS organizations were characterized by a lack of previous capital investment, poor management practices, inefficient manual coding techniques for applications development and maintenance, lack of automation, and just about every other factor that leads to excessive IS costs.
