Automating Cluster and Class Test Execution: Useful for Smalltalk Applications?

Author: Gail C. Murphy, University of Washington (gmurphy@cs.washington.edu)


Testing is an engineering activity that requires an organization to continually balance the cost of performing test-related tasks against the cost of fixing problems once the system is installed and in use. Any organization therefore faces fundamental questions about how to strike this balance: which testing activities to perform, when to perform them, and how much testing is enough.

At MPR Teltech, we needed answers to these questions that were suitable for managing the development of a highly available network surveillance system implemented in Eiffel and C++. The testing process we used included code walkthroughs, compilation diagnostics, system testing, and automated support for class and cluster test execution [1]. Some of the lessons learned from applying this process, in particular the value of automating cluster and class tests, may be relevant to those considering approaches for testing Smalltalk applications.

We tested clusters (and classes) as follows. Before implementing a cluster (a group of related classes), an engineer would prepare a cluster test plan outlining scenarios of use of the cluster and would develop test scripts for a proprietary ACE (automatic class executor) tool. The test scripts defined a number of test cases; each test case sent a stream of messages to one or more objects, with the outcome of each message send checked against an expected value. Once implementation was complete, the ACE tool was used to execute the test scripts against the implemented cluster (and classes), reporting any unexpected values and exceptions that occurred during test execution. The test scripts were also used for regression testing. One distinctive feature of the ACE tool was its support for executing test scripts defined for ancestor classes against their descendants.
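The ACE tool itself was proprietary and its script format is not reproduced here; the sketch below is only an assumed illustration, in Python rather than Eiffel or Smalltalk, of the general shape of a script-driven class/cluster test executor. The Step and TestCase names, the (receiver, selector, arguments, expected value) step format, and the Counter example class are all invented for the example.

```python
# Illustrative sketch of a script-driven class/cluster test executor.
# The script format and names here are assumptions, not the ACE tool's.
from dataclasses import dataclass, field
from typing import Any, Callable

@dataclass
class Step:
    """One message send: invoke `selector` on a named object with `args`,
    then compare the result against `expected`."""
    receiver: str
    selector: str
    args: tuple = ()
    expected: Any = None

@dataclass
class TestCase:
    name: str
    setup: Callable[[], dict]          # builds the objects under test
    steps: list = field(default_factory=list)

def run(cases):
    """Execute each test case, reporting unexpected values and exceptions."""
    for case in cases:
        objects = case.setup()
        for step in case.steps:
            target = objects[step.receiver]
            try:
                actual = getattr(target, step.selector)(*step.args)
            except Exception as exc:   # report the exception, keep going
                print(f"{case.name}: {step.selector} raised {exc!r}")
                continue
            if actual != step.expected:
                print(f"{case.name}: {step.selector} returned "
                      f"{actual!r}, expected {step.expected!r}")

# Example script: exercise a hypothetical Counter class.
class Counter:
    def __init__(self): self.count = 0
    def increment(self): self.count += 1; return self.count
    def reset(self): self.count = 0; return self.count

cases = [
    TestCase(
        name="counter-basic",
        setup=lambda: {"c": Counter()},
        steps=[
            Step("c", "increment", (), 1),
            Step("c", "increment", (), 2),
            Step("c", "reset", (), 0),
        ],
    ),
]
run(cases)
```

Because the script is data rather than code, the same cases can be replayed unchanged for regression testing after each change to the cluster.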

We found this testing process beneficial for reducing several kinds of errors that had previously crept through to system testing and beyond, such as attributes being used before they were initialized. One reason the process was beneficial was that it enabled the definition and execution of a larger number of tests than our previous approaches had. The test execution scripts were particularly useful, playing three roles. First, the scripts helped automate some of the testing tasks. Second, the scripts documented the intended use of the class/cluster, which helped engineers trying to understand, evolve, or reuse it. Third, the scripts influenced how engineers approached the design, implementation, and testing of a class/cluster: more attention was paid to its testability.
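To make the uninitialized-attribute example concrete, the fragment below is a small, assumed Python illustration (the RunningAverage class is invented, not code from the project): a script step that sends a message before the initializing message exposes the error during test execution rather than after installation.

```python
# Assumed illustration of the uninitialized-attribute errors the scripts caught.
class RunningAverage:
    def start(self):
        self.total = 0
        self.count = 0

    def add(self, value):
        self.total += value      # fails if start() was never sent to the object
        self.count += 1

    def average(self):
        return self.total / self.count

# A test-script step that sends add() before start() surfaces the problem
# immediately as an unexpected exception in the test report.
avg = RunningAverage()
try:
    avg.add(5)
except AttributeError as exc:
    print("unexpected exception:", exc)
```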

Whether our experience, and the other available literature on testing systems implemented in statically checked languages ([2], [3], [4], among others), transfers to systems implemented in Smalltalk depends, in part, on the kinds of errors appearing in the products an organization produces. A useful first step in assessing this transferability is an analysis of the errors that occur in products implemented in Smalltalk. If those errors include unexpected interactions due to inheritance, inappropriate interaction sequences between objects, or uninitialized access to attributes, a script-based automated test executor is likely to be useful. A test executor similar to the one described above could be implemented and trialed at relatively low cost in a Smalltalk-based development.
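The inheritance-related errors mentioned above are what the ACE tool's script-reuse feature targets: scripts written for an ancestor class are re-executed against its descendants. The sketch below is an assumed Python illustration of that idea (the Account and SavingsAccount classes and the run_script helper are invented for the example), not the ACE mechanism itself.

```python
# Assumed sketch of reusing an ancestor's test script against descendant classes.
class Account:
    def __init__(self):
        self.balance = 0

    def deposit(self, amount):
        self.balance += amount
        return self.balance

class SavingsAccount(Account):
    def __init__(self):
        super().__init__()
        self.rate = 0.02

    def add_interest(self):
        self.balance += self.balance * self.rate
        return self.balance

# The ancestor's script: (selector, arguments, expected result) triples that
# any class conforming to Account is expected to satisfy.
ACCOUNT_SCRIPT = [
    ("deposit", (100,), 100),
    ("deposit", (50,), 150),
]

def run_script(cls, script):
    """Execute an ancestor's script against cls, reporting any deviations."""
    obj = cls()
    for selector, args, expected in script:
        try:
            actual = getattr(obj, selector)(*args)
        except Exception as exc:
            print(f"{cls.__name__}.{selector}: unexpected exception {exc!r}")
            continue
        if actual != expected:
            print(f"{cls.__name__}.{selector}: got {actual!r}, expected {expected!r}")

# Run the Account script against Account and every descendant under test.
for klass in (Account, SavingsAccount):
    run_script(klass, ACCOUNT_SCRIPT)
```

A descendant that overrides an inherited method in a way that violates the ancestor's expected behaviour shows up as a deviation when the reused script runs, which is the "unexpected interactions due to inheritance" class of error noted above.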

References

[1] G.C. Murphy, P. Townsend, and P. Wong. "Experiences with Cluster and Class Testing", Communications of the ACM, 37(9), 1994.

[2] D. Hoffman, J. Smillie, and P. Strooper. "Automated Class Testing: Methods and Experience", Proceedings of the First Asia-Pacific Software Engineering Conference, 1994.

[3] R.K. Doong and P.G. Frankl. "The ASTOOT Approach to Testing Object-Oriented Programs", ACM Transactions on Software Engineering and Methodology, April 1994.

[4] G. Rothermel and M.J. Harrold. "Selecting Regression Tests for Object-Oriented Software", Proceedings of the International Conference on Software Maintenance, September 1994.