
A well-structured portfolio assessment should lay the groundwork for this ongoing management approach. In addition to the 4R profile, the assessment should address other key questions:

§ How do end-user and formal IS-supported solutions interact?

§ How do current IT assets support fundamental business goals and processes?

§ What is the transition and development approach for each application in the portfolio?

§ What is the life cycle condition and investment posture for each application?

§ How does the applications strategy fit with the technical infrastructure?

§ Where should IS focus its spending?

Answering these questions provides the foundation for a new relationship that aligns the IS function and its business partners around business goals. Portfolio assessment becomes the first step in a fundamental shift away from the IS function's previous pattern of responding to a stream of requests, and the expenses they bring, driven by demands for change to existing systems or by proposals for new ones. Instead, the IS function moves toward managing the evolution of a series of IT assets. In this new approach, IS managers evaluate changes against the state of IT assets and the business processes they support.

The assessment itself represents a high-level plan for the development of IT assets; at the very highest level, it is a conceptual architecture for the corporation. A relevant parallel is the model used for citywide planning and construction. Allowing for adaptations required by the intangible nature of computing, business processes, portfolio strategy, technical infrastructure, and code inspections all have direct equivalents in the planning and construction disciplines of that model (e.g., zoning plans, infrastructure plans, design approvals, various permits, building-code inspections, and maintenance regulations). What the IS function typically lacks is the discipline of the process, the role of a city planning department, and the maintenance regulations for ongoing upkeep.

ADJUSTING TO HYBRID COMPUTING

Some IS organizations view the legacy challenge too narrowly, in primarily technical terms. In these times of hybrid computing, IS managers must navigate between sharply contrasting world views: relational versus hierarchical, flexible versus rigid, object-based versus procedural, distributed versus centralized, open versus closed.

The days of the single paradigm are gone, and the accelerating pace of business and technological change requires the IS function to accommodate multiple architectures, languages, and platforms. In response, successful companies are rethinking their fundamental approaches to integration, planning systematic value recovery from legacy assets, and devising comprehensive transition strategies.

Rethinking Integration

Legacy Overintegration The typical business application has increased in size by an astounding 5,400 percent since the start of the 1980s. A typical mission-critical integrated application has grown to include 1.2 million lines of code — assembled by stringing together what would have been 20 different applications 15 years ago. These numbers begin to illustrate the challenge posed by legacy overintegration.

Spaghetti code — the dominant challenge 15 to 20 years ago — has been replaced by spaghetti integration. Many architectures present numerous opportunities for uncontrolled interactions among applications, programs, code, and data. Structured programming — a widely accepted step forward over spaghetti code — offers some lessons for spaghetti integration. It stresses tight cohesion (i.e., keeping highly related functions together) and loose coupling (i.e., minimal connections between functions).

Unfortunately, in most cases of spaghetti integration and integrated shared databases, the opposite is true. Legacy applications are usually characterized by loose cohesion, with functional logic such as product edits and customer data sprinkled through numerous programs and applications. The applications are tightly coupled both through shared databases, redundant databases, and interface files and through uncontrolled interactions between numerous fields in countless programs.
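To make the contrast concrete, the following sketch (in Java, with purely hypothetical names) shows the structured-programming ideal applied at the integration level: the product edits that legacy applications sprinkle through numerous programs are gathered into one cohesive component, and callers couple to it only through a narrow interface.

// Tight cohesion: every product edit lives in one component.
// Loose coupling: callers see only a narrow interface.
// All names here are illustrative, not from any real system.

interface ProductEditor {
    boolean isValid(String productCode, double price);
}

class StandardProductEditor implements ProductEditor {
    @Override
    public boolean isValid(String productCode, double price) {
        // Rules that legacy systems typically scatter across programs.
        return productCode != null
                && productCode.matches("[A-Z]{2}\\d{4}")
                && price > 0.0;
    }
}

class OrderEntry {
    private final ProductEditor editor;   // depends on the interface only

    OrderEntry(ProductEditor editor) {
        this.editor = editor;
    }

    boolean accept(String productCode, double price) {
        return editor.isValid(productCode, price);
    }
}

public class CohesionDemo {
    public static void main(String[] args) {
        OrderEntry entry = new OrderEntry(new StandardProductEditor());
        System.out.println(entry.accept("AB1234", 19.95)); // true
        System.out.println(entry.accept("bad", -1.0));     // false
    }
}

Replacing StandardProductEditor would not disturb OrderEntry, which is precisely the maintainability payoff that loose coupling promises.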

Shifting to Component Architectures Component architectures replace shared data integration with tight cohesion and controlled coupling — practices that have much in common with object-oriented design and analysis. These architectures shift the focus to standardized interfaces and construction guidelines. In addition, communications mechanisms based on the use of components complement the traditional focus on applications and data. Component architectures use such items as desktop integration, a software message bus, remote data access, and data warehouses as building blocks to help applications cooperate through standardized interfaces.

Although the idea of components is readily understandable in terms of new development, it can also be applied to legacy applications. Some organizations are now viewing these applications as reusable components that can be incorporated into new development using object-oriented techniques. Building on component-based message-driven architectures, IS professionals can use a variety of techniques to avoid plunging into the heart of legacy systems and forcing in new levels of complexity. Many of these techniques are now becoming well established, as the following sections on value recovery and transition strategies will discuss.
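As an illustration of the coupling style that a software message bus encourages, the following minimal in-process sketch (again in Java, with hypothetical names; a production bus would be middleware, not a HashMap) shows components cooperating through published messages rather than through shared databases or interface files.

import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.function.Consumer;

// A toy stand-in for a software message bus: components register an
// interest in a topic and receive whatever is published to it.
class MessageBus {
    private final Map<String, List<Consumer<String>>> subscribers = new HashMap<>();

    void subscribe(String topic, Consumer<String> handler) {
        subscribers.computeIfAbsent(topic, t -> new ArrayList<>()).add(handler);
    }

    void publish(String topic, String payload) {
        subscribers.getOrDefault(topic, List.of())
                   .forEach(handler -> handler.accept(payload));
    }
}

public class BusDemo {
    public static void main(String[] args) {
        MessageBus bus = new MessageBus();

        // Neither subscriber knows the other exists, and neither reads
        // the publisher's database: coupling is controlled by the bus.
        bus.subscribe("customer.updated",
                p -> System.out.println("New billing component sees: " + p));
        bus.subscribe("customer.updated",
                p -> System.out.println("Legacy facade sees: " + p));

        bus.publish("customer.updated", "customer 1042 changed address");
    }
}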

The shift to component architectures also supports two concrete commitments — reuse and maintainability — that increase the value of IT assets. Reuse increases the value of an asset by reducing future development costs. Maintainability increases value by allowing an asset to evolve as the business changes. Both significantly reduce costs by focusing investment on fewer, more flexible assets. Breaking applications into controllable, standardized components is as important to maintainability as it is to reuse.

Recovering Value from Legacy Assets

Legacy applications are storehouses of immense business value, even if that value can be extremely difficult to exploit. The value can be classified as falling into one of three main areas: data, processing logic, and business rules.

Fortunately, new strategies, techniques, and tools are emerging that let IS managers and their staffs reengineer, recondition, coexist with, or extract value from existing applications. These approaches also move legacy assets closer to the architectures that underlie new development.

Transition Strategies for Legacy Assets

Several fundamental techniques are now emerging as the basis of a value recovery program in a hybrid environment. As the following sections illustrate, most of these tactics rely on breaking large applications into smaller components and then reengineering or reconditioning them.

§ Reverse Engineering — Automated tools help raise a system to a higher level of abstraction by, for example, deriving a system specification or requirements model from existing code and data. This technique may create a new baseline for incorporating enhancements and enabling future code regeneration. It also facilitates traditional maintenance by enhancing developers’ understanding of the system.

§ Packages as Components — Standard COTS (commercial off-the-shelf) packages such as word processors, spreadsheets, work-flow engines, and graphics libraries can provide critical components for hybrid solutions.

§ Components — Partitioning legacy systems or legacy programs into smaller components lets IS staff phase out or replace a system one piece at a time. Some code analysis tools help with this task, but the conceptual commitment to modularity and reuse is more important than any particular tool.

§ Rationalization and Restructuring — Cleaning up existing code by eliminating redundancy, instituting standards, improving structure, and updating documentation simplifies maintenance and enhancement. This is frequently the first step in a legacy strategy.

§ Conversion and Rehosting — Rehosting involves moving legacy applications — untouched — onto client/server platforms. This approach not only promises potentially lower costs but also enables multiple application components to be integrated more seamlessly into a solution for the business user.

§ Architectural Layers — Layering separates the various components of an application, such as presentation/user interface, application logic, data access level, and communications.

§ Wrappering — Creating a software wrapper to encapsulate and modularize legacy components enables them to coexist and communicate with object-oriented components. A function server lets the legacy code send and receive object-oriented messages. In this way, wrappering positions legacy code to provide continuing value and to be reused in future systems; a minimal sketch appears after this list.

§ Forward Regeneration — Automated tools help developers regenerate code based on modified higher-level abstractions rather than modifying the code directly. This technique may be used after reverse engineering, or if the original system was developed through code generation.

§ Surround — Creating additional functionality and data components around the legacy system, but without modifying the system itself, allows legacy components to be phased out as the surrounding components supplant the legacy functions.

§ Data Warehouse — Whereas surround strategies put new functionality in front of legacy systems and integrate at the desktop level, warehouses integrate the data behind legacy applications.

§ Maintenance and Enhancement — As the section on transforming the IS organization will illustrate, leading practitioners are rethinking the traditional approach to managing the evolution of legacy assets, even in the areas of maintenance and enhancement.

§ New Development — Best-practice models and capability maturity assessment help chart a course to better results and productivity in new applications development.

§ Package Replacement — Software packages often replace a legacy system, especially one used for nondifferentiating basic support applications. COTS application packages may also be used in a surround approach.

§ Outsourcing: Long-Term or Transitional — Because outsourcing a clearly definable task can be cost-effective under the right circumstances, IS managers should consider outsourcers for the enhancement, redevelopment, or replacement phases of a legacy strategy.
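As promised in the wrappering entry above, the following sketch (hypothetical names throughout; the legacy routine is simulated by a static method) shows the basic shape of a wrapper: an untouched legacy inquiry is encapsulated behind an object interface so that newer components can message it without knowing its internals.

// Stand-in for an untouched legacy routine, e.g., a batch balance
// inquiry; in practice the wrapper would reach it through a function
// server, a transaction monitor, or a native-code bridge.
class LegacyAccountRoutines {
    static String callBalanceInquiry(String accountId) {
        return accountId + "|000123.45|OK";   // fixed-format legacy record
    }
}

// The object-oriented face that new components actually see.
interface AccountService {
    double balanceOf(String accountId);
}

// The wrapper translates object messages into legacy calls and back.
class LegacyAccountWrapper implements AccountService {
    @Override
    public double balanceOf(String accountId) {
        String record = LegacyAccountRoutines.callBalanceInquiry(accountId);
        String[] fields = record.split("\\|");
        return Double.parseDouble(fields[1]);
    }
}

public class WrapperDemo {
    public static void main(String[] args) {
        AccountService service = new LegacyAccountWrapper();
        System.out.println(service.balanceOf("ACCT-7"));  // prints 123.45
    }
}

Because callers hold only an AccountService reference, the legacy routine behind the wrapper can later be replaced by a rewritten component without disturbing them.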

These transition techniques are certainly not stand-alone mechanisms. Not only is there overlap among the various techniques, but they also interact with each other to produce more finely tuned results. IS managers should carefully assess the techniques in a given situation as part of the overall project plan.

