estimates that are not based on what is required or pressure to not have estimates at all.
PROJECT TRACKING
As with all development projects, effective project management is essential to avoiding or managing client/server development pitfalls. The elements listed below are used to identify where the project stands, what remains, and the effort required to complete it.
§ Defining tasks: Development tasks should be defined at a size that is small enough to be easily tracked and meaningful. The project manager can effectively manage a project if there are specific deliverables with clearly defined hours and frequent due dates. Large tasks with ambiguous deliverables make it difficult to know if the project is in trouble in time to effectively manage the pitfalls. Task interdependencies and assignment of responsibilities are particularly important for projects with multiple related teams where it may be difficult to determine who is responsible for what.
§ Estimating hours required: Estimates should be made by someone experienced with the work involved, ideally the developer who will perform the task. This builds ownership of and commitment to task completion.
§ Estimating percentage of completion: Percent complete is an unreliable guess when it is based on the amount of work already expended on a task. It should instead be based on defined deliverables, such as the number of tasks, screens, or reports completed.
§ Timekeeping: Timekeeping is frequently not used effectively. Many developers do not regularly record their time or keep an accurate estimate of the hours spent. This makes it difficult to determine the project status. In addition, the failure to record all hours for this project may cause other projects to be underestimated if the recorded hours are used for future estimates.
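The tracking elements above can be sketched as a simple calculation. The task figures below are hypothetical; the point is that percent complete is derived from counted deliverables, not from hours already spent:

```python
# Deliverable-based tracking: percent complete comes from counted
# deliverables (tasks, screens, reports), never from hours expended.

def percent_complete(done: int, total: int) -> float:
    """Fraction of defined deliverables finished."""
    return done / total

def remaining_hours(estimate: float, done: int, total: int) -> float:
    """Effort left, assuming the original estimate still holds."""
    return estimate * (1 - percent_complete(done, total))

# Hypothetical task: 40 estimated hours, 3 of 8 screens delivered.
left = remaining_hours(40.0, done=3, total=8)
print(left)  # 25.0 hours of effort remaining
```

Because the calculation ignores hours already booked, a developer who has burned 30 hours on a 40-hour task with only 3 of 8 screens done is immediately visible as a task in trouble.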
ISSUE TRACKING
Issue tracking can be used to refine project requirements by documenting and resolving decisions that were not contemplated during the original requirements definition. The issues log is also a good vehicle for tracking outstanding problems and ensuring that they are resolved before the system is implemented into production. A common pitfall with client/server systems is a lack of stability due to software incompatibilities, network errors, and weaknesses in how the database handles concurrent updates. Issues should be weighted by severity, from “show stoppers” to “nice enhancements,” to prioritize the development effort. The owning user of the system should be the one to determine whether an issue has been resolved, as there is a tendency for developers to claim resolution prematurely. As with any problem log, the issues log should contain who identified the issue, the date the issue was identified and communicated, its severity, a description of the issue, and, if resolved, the resolution text. This can also serve as an audit trail of the decisions made.
Issues should be retained after they are resolved to be used for future trending.
Trend analysis should be performed to track training issues, as well as problems with hardware, operating systems software, and other application software. If each error is logged, the issues log can also be used to track the overall stability of the system.
The issues log can be used to diagnose problems by pinpointing the situations where the problem occurred. The problem information can also be useful in obtaining vendor assistance in problem resolution by providing clear evidence of correlation between problems and vendor products.
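An issue record along the lines described above can be sketched as follows. The field names, severity scale, and sample entries are hypothetical illustrations, not a prescribed schema:

```python
from dataclasses import dataclass
from datetime import date
from typing import Optional

# One entry in the issues log, carrying the fields the text calls for:
# who raised it, when, how severe it is, what it is, and its resolution.
@dataclass
class Issue:
    identified_by: str
    identified_on: date
    severity: int                     # 1 = "show stopper" ... 5 = "nice enhancement"
    description: str
    resolution: Optional[str] = None  # set only after the owning user signs off

log = [
    Issue("j.doe", date(1998, 3, 2), 1, "DB deadlock on concurrent update"),
    Issue("a.lee", date(1998, 3, 5), 5, "Add export-to-spreadsheet option"),
]

# Prioritize open issues by severity; resolved issues stay in the log
# for trend analysis and as an audit trail of decisions.
open_issues = sorted((i for i in log if i.resolution is None),
                     key=lambda i: i.severity)
print(open_issues[0].description)  # the show stopper comes first
```

Keeping resolved entries in the same structure makes the trending described above a matter of counting issues by category or date rather than reconstructing history after the fact.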
DEVELOPING SKILLS WITH TECHNOLOGY AND TOOLS
On-the-job training is not the way to learn new client/server development tools and techniques. A developer should certainly take classroom or computer-based training (CBT). However, developers should not embark on large-scale projects without first having successfully completed small projects. This would reduce project risk by allowing the developers to prove themselves on a smaller scale and give them the ability to more accurately estimate the effort involved. Project managers should also be trained in managing progressively larger projects focusing on multiple teams, task interdependencies, and multiple users.
On larger projects with new technologies, there can be many people with different levels of expertise attempting to make decisions. Knowledge can range from what a person read in a magazine, to what they heard from someone else, to what they learned in training, to what they know from working with a system or from past development experience.
The first three levels of knowledge are fairly weak but quite common. People’s roles should be managed based on a recognition of their level of knowledge to ensure that tasks are appropriately assigned, estimates are reliable, and the decisions made and directions taken are sound. Reference checks should be made for new employees and outside consultants who claim to be “experts” to verify their level of expertise.
SECURITY
A successful security implementation can be difficult in a client/server environment due to the many processing layers that must be secured:
§ Client workstation. Historically, this has been a personal computer with weak controls restricting who has access to programs and files. However, with the introduction of operating systems such as Microsoft’s Windows NT Workstation, the controls available rival the level of security available on a mainframe.
§ Application. This level of security typically controls the menus and fields that a user is able to access. The levels of access are typically read, update, and delete.
§ Network. This deals with securing activity on the network. Tools such as network sniffers are available to read and alter data that is transmitted over the network. There are typically two types of network controls used to prevent inappropriate disclosure or alteration of data. The first is restricting access to segments or areas of a network. This is usually done with firewall systems or screening routers that restrict traffic based on source and destination addresses. Internet connections should be controlled by firewalls.
The other method for securing network traffic is encryption. This prevents the ability to read or alter data going across the network. At a minimum, passwords should be encrypted.
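As a minimal sketch of encrypting network traffic, a client can refuse unencrypted or unverified connections by using TLS. Python's standard `ssl` module defaults illustrate the two properties involved; the host name shown in the comment is a hypothetical example:

```python
import ssl

# A default TLS context enforces the two properties that matter here:
# traffic (including passwords) is encrypted in transit, and the
# server's certificate is verified, so data cannot be read or silently
# redirected by a network sniffer.
ctx = ssl.create_default_context()

print(ctx.verify_mode == ssl.CERT_REQUIRED)  # True: certificates are checked
print(ctx.check_hostname)                    # True: host name must match

# A socket would then be wrapped before any password crosses the network:
#   with ctx.wrap_socket(raw_sock, server_hostname="db.example.com") as s:
#       ...
```

The design point is that encryption is applied below the application, so every field transmitted, not just the password, is protected from sniffing.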
§ Server. Servers typically control who can log on to the network and who can access databases and files on the network. Server security is the most
common type of security used in a local area network. Access to the network is typically controlled through a userid and corresponding password. Access to files is then granted based on the assigned user or group id. Most servers provide for logging security administration and violation activity. In large client/server systems, a mainframe is performing the server function.
§ Database. The database system can also perform security functions, requiring a userid and password and then assigning access to data based on the user or group id. In addition, databases can log security administration and violation activity.
Coordinating multiple levels of security is difficult, and many systems introduce security weaknesses by ignoring access controls on certain platforms or scripting logons on platforms that can be easily circumvented. Another typical problem with client/server systems is that they are cumbersome, requiring multiple logons with multiple userids and passwords.
Ideally, the application should be designed with a single sign-on that controls access on the application, workstation, server, and database systems, along with network controls that restrict access to the appropriate segments of the network and encrypt sensitive traffic.
TESTING
While the elements of the traditional quality assurance/testing process apply to the client/server environment, this environment presents unique challenges that require more rigorous testing. Developers, however, may not take testing as seriously because it is “only a PC system.” The client/server systems development process should include test plans documenting expected results, actual results, and the disposition of differences.
If the system requirements have been well defined, they can be used to develop the test plans. Testing should include all platforms, as well as the interfaces between them and the ability to handle concurrent users. In addition to handling multiple updates through concurrent connections, many client/server systems include the ability to operate without a direct network connection through database synchronization using a process called replication. This requires unique testing steps to verify that replicated additions, updates, or deletions are handled correctly through the replication process as well as working with the system operating in a multiple-user mode. Concurrent updates to databases (two people attempting to update the same record at the same time) can create database conflicts. How the system handles conflicts should be documented and managed by the application software or manual procedures.
Poor response time is often an issue with client/server systems. Bottlenecks can be corrected by increasing network capacity, tuning database queries, or optimizing the database design.
Client/server change management also creates unique challenges with version control. Programming code is typically distributed across multiple platforms as well as embedded within databases. While PC version control packages are frequently used, change management systems that include source/object synchronization are not as sophisticated as the systems used in the mainframe environment.
DEVELOPING DOCUMENTATION
While the goal of a client/server system is to be user friendly and provide online help functions, these systems should additionally have the traditional types of documentation available to operate, maintain, and use the system. The documentation requirements should include the following: