Smalltalk Test Manager

Author: John Brant, University of Illinois (brant@cs.uiuc.edu)


Introduction

Like a lot of Smalltalk groups, the UIUC Smalltalk group has felt the need for tools that better support testing. We need better tools for managing regression tests and for testing interactive programs. Testing tends to be ad hoc, and we tend not to keep track of old tests or of the results of testing.

In an attempt to solve this problem, we have been developing a test manager for Smalltalk.

Smalltalk Test Manager

The Smalltalk Test Manager is a testing package that helps developers record and manage a set of tests. It records Smalltalk code that exercises the test case, as well as code to compare the results. Once recorded, tests can be evaluated automatically without any user interaction. Although it was originally developed by Gordon Davis as part of his MS thesis, I have extended it so that user interfaces can be tested and test suites can create objects that are used by their tests.

The test manager has three basic types of objects: tests, test suites, and test classes. Tests are the only objects that actually test code; the others are simply groupings that aid the test developer.

Tests are broken into a test script and a result comparison script. The result can either be an object returned by the test script or a signal raised during the execution of the script. Once a test is executed, the test manager saves the result so that it can be examined later. The test manager also saves method coverage and profiling information for each test. The coverage information can help the tester determine how effective the tests are. Although the profiling information may not help find bugs in the program's logic, it can help find usability bugs where the program runs too slowly.
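
As a concrete sketch, a test with separate test and comparison scripts might be recorded along the following lines. The class name TMTest and the testScript:/comparisonScript: accessors are assumptions made for this example; the test manager's actual protocol may differ.

    | test |
    test := TMTest new.
    "The test script exercises the code under test and answers a result."
    test testScript: '
        | collection |
        collection := OrderedCollection withAll: #(3 1 2).
        collection asSortedCollection asArray'.
    "The comparison script receives the saved result and checks it."
    test comparisonScript: '[:result | result = #(1 2 3)]'.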

Tests can also have user interface scripts that were recorded using the Smalltalk Recorder application. The test can first open a window of the application to test, and then play the user interface script. Once the script finishes, the test can verify that the application was modified correctly. Although these scripts can be played back on any system, the recordings are not portable since they are captured at the device level. Differences in fonts and screen sizes will cause playback to fail on another machine. Furthermore, since the tests are recorded at the device level, widget events such as "OK button pressed" are not captured; instead, the recorder stores an event such as "left mouse button pressed at position (50,100)."
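
The difference between the two recording levels can be pictured as follows. The event representation shown here is purely illustrative and is not the format actually used by the Smalltalk Recorder.

    | deviceScript widgetScript |
    "A device-level recording remembers raw events tied to screen coordinates."
    deviceScript := OrderedCollection new.
    deviceScript
        add: #(mouseMovedTo: 50 100);
        add: #(leftButtonPressedAt: 50 100);
        add: #(leftButtonReleasedAt: 50 100).
    "A widget-level recording would instead remember the semantic event,
    independent of fonts, screen size, and widget placement."
    widgetScript := OrderedCollection new.
    widgetScript add: #(buttonPressed: ok).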

Test suites are used to group similar tests and test suites so that the tests can be managed more easily. Besides grouping, test suites can also create and initialize objects that are used by the tests they contain. For example, a set of tests on file access might be part of a test suite that opens a file before each test is executed and closes it afterward.
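
For instance, a file-access suite might be assembled along these lines. The class TMTestSuite, the setUpScript:/tearDownScript: selectors, and the file-handling expressions are assumptions made for this sketch; the variable file is assumed to be shared between the suite's scripts and its tests.

    | suite |
    suite := TMTestSuite named: 'File access tests'.
    "Run before each test; creates the stream that the tests share."
    suite setUpScript: 'file := ''scratch.txt'' asFilename writeStream'.
    "Run after each test; releases the file."
    suite tearDownScript: 'file close'.
    suite addTest: (TMTest new
        testScript: 'file nextPutAll: ''hello''. file position';
        comparisonScript: '[:result | result = 5]';
        yourself).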

Test classes are similar to test suites, except that they group all of the tests for a class. Once tests have been defined in a test class, they are inherited by the test classes of all subclasses. This avoids entering duplicate tests that differ only in the class of the object being tested.
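
A sketch of that idea, using hypothetical names throughout (TMTestClass, testedClass, and the Account classes), might look like this:

    | accountTests savingsTests |
    "Define a test once for the superclass Account."
    accountTests := TMTestClass forClass: Account.
    accountTests addTest: (TMTest new
        testScript: 'self testedClass new balance';
        comparisonScript: '[:result | result = 0]';
        yourself).
    "The test class for the subclass SavingsAccount inherits the balance
    test; only the class of the object under test changes."
    savingsTests := TMTestClass forClass: SavingsAccount.
    savingsTests runAllTests.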

Future Directions

Although the test manager is a usable tool, many enhancements could make it better. One improvement would be to assist the user in writing the script that compares test results. Currently, the user must create a script that checks the returned result. While this might be as simple as testing equality, it might require traversing several objects using accessor methods or the instVarAt: method. Another improvement would be to allow the user to specify that certain parts of an object may not be assigned or read; if those parts were assigned or read during test execution, the test would fail. Still another improvement could be made to user interface testing. Instead of recording a user interface script, the tool might record the messages sent to the model. During playback, these messages could be sent to the model without using the user interface objects (views and controllers). Also, the user interface tool might be modified to record widget events instead of device events, so that user interface tests could be more portable. A final addition to the test manager might be an analysis tool that checks for common Smalltalk bugs, such as defining an #= method but not #hash. Although this differs from the other parts of the testing tool in that it tries to catch bugs before the code is executed, it could catch some hard-to-find bugs.
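
The last suggestion can be sketched with standard reflection messages alone. The snippet below finds classes that implement #= without also implementing #hash; the report format is merely illustrative.

    | suspects |
    suspects := Object withAllSubclasses select: [:each |
        (each includesSelector: #=) and: [(each includesSelector: #hash) not]].
    suspects do: [:each |
        Transcript show: each name asString , ' defines #= but not #hash'; cr].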