
Quality Assurance Testing

Quality assurance testing is about making sure that the site works as intended, error-free, on all the platforms that its audience is expected to use to access it. The task here is to find and fix defects before releasing the site for public consumption.

It is particularly important to note that design and user interface decisions have already been made and tested by this point. Any feedback of that nature should be noted for future consideration, but at this stage the key is making sure that the site works, and changing it to address "late" suggestions risks breaking something else.

The author indicates there are "several kinds of testing," but discusses only the following:

Testing should be based on a detailed plan: testers should have a spreadsheet with three columns - the page, the item, and the expected behavior - and a fourth column to indicate whether it worked as expected when they tested it.
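A minimal sketch of such a plan as structured records in Python; the pages and items below are hypothetical examples, not taken from the source:

    # The test plan: page, item, expected behavior, plus a fourth
    # field the tester fills in during the test pass.
    test_plan = [
        {"page": "/checkout", "item": "Submit button",
         "expected": "order confirmation page loads", "passed": None},
        {"page": "/login", "item": "Password field",
         "expected": "input is masked as it is typed", "passed": None},
    ]

    # The tester records the outcome after exercising each item.
    test_plan[0]["passed"] = True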

Additionally, testing should be done to catch errors arising from nonstandard behavior: the user submitting a form with a field left blank, with the wrong kind of data entered, or with data in an incorrect format.
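A sketch of this kind of negative-path testing in Python; validate_email() is an assumed example function, not anything named by the author:

    # Hypothetical validator: accepts only non-blank, plausibly
    # formatted email addresses.
    def validate_email(value):
        return bool(value) and "@" in value and "." in value.split("@")[-1]

    # Each case feeds nonstandard input and expects rejection.
    assert not validate_email("")              # blank field
    assert not validate_email("12345")         # wrong kind of data
    assert not validate_email("user@domain")   # incorrect format
    assert validate_email("user@example.com")  # well-formed input passes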

Typically, there are three classes of errors: design flaws, content errors, and software errors. The author then names a fourth class, the "mechanical error," in which something such as a browser crash occurs.

When a tester finds a defect, it must be documented in sufficient detail that another person can reproduce it for debugging. The conditions under which the error occurred should also be documented.
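One way to capture that documentation in a structured form, sketched in Python; the field names and sample values are illustrative assumptions:

    from dataclasses import dataclass

    @dataclass
    class DefectReport:
        page: str         # where the defect was observed
        steps: list       # exact actions that trigger it, in order
        expected: str     # behavior called for by the test plan
        actual: str       # behavior actually observed
        environment: str  # browser, OS, and other conditions

    report = DefectReport(
        page="/checkout",
        steps=["add item to cart", "submit form with blank ZIP code"],
        expected="validation message appears",
        actual="server error page is shown",
        environment="Firefox 2.0 on Windows XP",
    )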

Moreover, once a defect is fixed, that interface and any other file that was touched in the process of repairing the problem should be re-tested to confirm that the repair has been effective and that no other defects were created when the files were altered.

Having "zero defects" is an ideal goal, but is not entirely realistic. It may be necessary to launch with a few minor defects - but to do so, the list of bugs must be prioritized by importance, and the decision must be made as to what defects can be corrected after launch. The author suggests a two-component score that rates the "seriousness" of the error (1-10) and the difficulty of repairing it (1-10) and sorting the list accordingly (most to least serious, with items of identical value ranked by easiest to hardest fix).

As to who tests a site: it should not be those involved in building it. Developers test their own work repeatedly as they go along, and may have developed patterns of behavior that will prevent them from finding errors that users will. Generally, there should be separate personnel involved in testing, which can be done internally or externally (several companies have sprung up that specialize in testing Web sites).

There is also the concept of beta testing: letting some small subset of customers use the site, with the knowledge that they are being given early access in order to help identify errors. EN: in the present day, it is not unusual to follow the Microsoft model, releasing buggy software to the public and fixing problems in version 1.1.

Finally, the author urges project managers to defend the testing effort. It is often perceived as being of little value, when in reality it is critical to maintaining the reputation of the site.

