4: Planning Your Test
Testing is a critical step in ensuring that the solution will operate as designed: it identifies and addresses potential problems before the solution is released to production, where any malfunction can cause serious problems.
"Selling" the Need to Test
Many organizations see little point in testing: they expect that the system was already "tested" by those who did the development work, and seek to save time and money by skipping or shortening the testing step.
It's largely a political problem, and the author recommends "selling" testing as a method of avoiding the impact of problems in production (which are more costly, more publicly embarrassing, create an emergency situation, etc.) and making sure that time and resources are allocated to testing in advance.
What Is the Purpose of Planning a Test?
In order to be effective, testing must be methodical: you must assess the performance of a system under normal operating conditions as well as testing for a wide array of foreseeable anomalies.
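The idea of covering both normal operation and foreseeable anomalies can be sketched with a simple unit test. The `average` function below is a hypothetical stand-in, not something from the text - the point is that the test class deliberately exercises the normal case plus anomalies (empty input, extreme values):

```python
import unittest

# Hypothetical function under test (an assumption for this sketch).
def average(values):
    if not values:
        raise ValueError("cannot average an empty sequence")
    return sum(values) / len(values)

class AverageTests(unittest.TestCase):
    # Normal operating conditions: typical, well-formed input.
    def test_normal_conditions(self):
        self.assertEqual(average([2, 4, 6]), 4)

    # Foreseeable anomaly: empty input should fail loudly, not crash oddly.
    def test_anomaly_empty_input(self):
        with self.assertRaises(ValueError):
            average([])

    # Foreseeable anomaly: extreme values near the float range.
    def test_anomaly_extreme_values(self):
        self.assertEqual(average([1e308, -1e308]), 0.0)

# Run with: python -m unittest <module name>
```

A methodical plan enumerates these anomaly cases in advance rather than discovering them in production.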
It's also noted that one of the greatest risks to a project is change requests - the "little extras" that seem to arise as the project moves forward and are bolted on without much regard for how they impact the core operations. The author goes into quite a lot of detail about this.
A test is based on "cases" or "scripts" that define what is being tested, how it is being tested (sequence of actions), what data is to be used, and the expected outcome. After the test is run, the results are "success" (the expected outcome was achieved) or "failure" (the expected outcome was not achieved, or there were side effects that were unforeseen) and in the latter case, details as to the difference between expected and actual outcomes.
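A minimal sketch of how such a "case" or "script" might be represented - the names and structure here are invented for illustration, not prescribed by the text, but each field maps to the elements described above (what is tested, the sequence of actions, the data, the expected outcome):

```python
from dataclasses import dataclass

@dataclass
class TestScript:
    name: str        # what is being tested
    actions: list    # sequence of callables, applied in order
    data: dict       # the data to be used
    expected: object # the expected outcome

    def run(self):
        result = self.data
        for action in self.actions:
            result = action(result)
        if result == self.expected:
            return ("success", None)
        # On failure, record the difference between expected and actual.
        return ("failure", f"expected {self.expected!r}, got {result!r}")

case = TestScript(
    name="apply 10% discount",
    actions=[lambda d: round(d["price"] * 0.9, 2)],
    data={"price": 100.00},
    expected=90.00,
)
print(case.run())  # -> ('success', None)
```

The failure branch carries the expected-vs-actual detail the text calls for, so a reviewer can diagnose the discrepancy without rerunning the script.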
Testers should be as objective and unbiased as possible. Ideally, testing should be done by an independent group, who has not been directly involved in the project and who do not have conflicts-of-interest over its success.
Kinds of Tests
The author provides a fairly extensive list of the kinds of tests that can be performed:
- A simulation test - Ensures that the test environment used for additional testing is an accurate reflection of the actual environment in which the application will eventually live.
- A unit "black box" test - Takes an element out of context to ensure that its internal functions are sound and robust.
- A function test - Performed on a single "element" to ensure that it operates properly - takes input, performs operations, and returns output - as needed for the system
- A user readiness test - The system is tested with a sample user who is asked to perform a task, to determine whether the user understands how to use the system and determine training needs (EN: This is commonly called a "usability test")
- A "user experience" test - The system is tested to determine how well it "interacts" with the user: is the user frustrated by the wait time, can he navigate among the screens, etc. (EN: this is also user-related, but makes an important distinction - it's possible for a user to be able to utilize the system to perform a task, with a clear indication of success or failure, but another consideration is how they "feel" about the experience, especially when usage of a system is voluntary).
- Integration Test - Places all the elements (or a significant set of them) in a test environment and examines the interactions between them, generally as a follow-up to the function test (each piece works well in itself, now how do they work together?)
- Regression Test - Tests a new element against previous versions of the environment to test for bugs and flaws that may arise if other elements are removed or downgraded.
- Unknown Element Test - A bug is deliberately placed in one of the elements in order to test the system's ability to recognize the source of the bug and take any required actions to work around it.
- Load Test - The elements are tested to see what volume of demand they are capable of withstanding. Typically, the system is loaded until it "breaks" as a way of assessing its maximum capacity, and when a breakdown occurs, the source is traced to an element to determine if the weak links can be reinforced. This is often called a "stress test" when performed on a single element.
- The operational test checks the system for "operational effectiveness and operability" (EN: whatever that means) with an eye toward how it will impact the day-to-day operation of the business and the staff.
- The change-implementation test is done when a system is being upgraded, to test how well the old system works when new components are swapped in.
- The configuration test alters configuration parameters to determine what the impact to the system may be if settings are changed.
- An environment test pertains to the effect of environmental conditions (humidity, temperature, pressure, and power fluctuations) to determine what damage may be done to the hardware, and what malfunctions might occur in the system as a result.
- Parallel/switchover test - Tests the new solution alongside an existing solution to determine the differences between the two
- Degradation Test - The system is tested for its reaction when individual elements are disabled.
- Recovery Test - The system is tested to determine if it can gracefully regroup when an element (or the entire system) is temporarily disabled, then brought back online.
- Shutdown/startup tests - Reboots the system to ensure that the shutdown process goes smoothly (and does not damage anything) and to determine what, if anything, needs to be done when the system is started up.
- Security Test - How well the system defends against a hostile attempt to gain access or impede the performance of the system
- DoS Test - Based on a form of Internet attack (denial of service), how well the system responds to a sudden spike in demand, presumably created to intentionally overload the system
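The load test described above - increasing the volume of work until the system "breaks" to find its maximum capacity - can be illustrated with a rough sketch. The handler and the response-time budget here are invented for the example; a real load test would drive the actual system:

```python
import time

# Stand-in for the system under test (an assumption for this sketch):
# the cost of a batch grows with the number of requests.
def handle_batch(n_requests):
    start = time.perf_counter()
    total = 0
    for i in range(n_requests):
        total += i * i  # simulated per-request work
    return time.perf_counter() - start

def load_test(budget_seconds=0.05, start=1_000, factor=2, ceiling=10_000_000):
    """Double the load until a batch exceeds the response-time budget.

    Returns (max load sustained, load at which the system 'broke'),
    with None for the breaking point if the ceiling was never exceeded.
    """
    load = start
    last_ok = 0
    while load <= ceiling:
        elapsed = handle_batch(load)
        if elapsed > budget_seconds:
            return last_ok, load
        last_ok = load
        load *= factor
    return last_ok, None

sustained, broke_at = load_test()
print(f"sustained {sustained} requests; broke at {broke_at}")
```

Once the breaking point is found, the slow element is traced and reinforced; running the same probe against a single element rather than the whole system is the "stress test" variant the text mentions.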