1.5.3 Successful tests detect faults
As the objective of a test should be to detect faults, a successful test is one that does detect a fault. This is counter-intuitive, because faults delay progress: a successful test is one that may cause delay. But the successful test reveals a fault which, if found later, could be many times more costly to correct, so in the long run it is a good thing.
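The principle can be illustrated with a small sketch. The discount function and the fault seeded in it are hypothetical, purely to show that the test which fails, and thereby reveals a fault, is the "successful" one:

```python
def apply_discount(price, percent):
    """Return price reduced by percent per cent. Contains a seeded fault."""
    # Fault (deliberate, for illustration): subtracts the percentage
    # value itself rather than that fraction of the price.
    return price - percent

def weak_test():
    # Passes despite the fault (100 happens to be a lucky price),
    # so it tells us nothing: an "unsuccessful" test.
    return apply_discount(100, 10) == 90

def successful_test():
    # Fails (190 != 180), thereby revealing the fault: in this
    # chapter's terms, a successful test.
    return apply_discount(200, 10) == 180
```

Note that the weak test would happily let the fault through to a later, more expensive stage; the successful test forces it to be fixed now.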
1.5.4 Meaning of completion or exit criteria
Completion or exit criteria are used to determine when testing (at any stage) is complete. These criteria may be defined in terms of cost, time, faults found or coverage criteria.
1.5.5 Coverage criteria
Coverage criteria are defined in terms of items exercised by test suites, such as branches, user requirements, or the most frequently used transactions.
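Branch coverage, one of the criteria named above, can be sketched by hand. The `grade()` function and its two branch labels are illustrative assumptions, not part of the text; real projects would use a coverage tool rather than manual bookkeeping:

```python
# Minimal sketch of branch coverage as a completion criterion:
# testing stops when every branch has been exercised at least once.

executed = set()

def grade(score):
    if score >= 50:
        executed.add("pass-branch")
        return "pass"
    else:
        executed.add("fail-branch")
        return "fail"

def branch_coverage():
    # grade() has exactly two branches.
    return len(executed) / 2

grade(70)                  # exercises only the pass branch
half = branch_coverage()   # 0.5 -> exit criterion not yet met
grade(30)                  # exercises the fail branch as well
full = branch_coverage()   # 1.0 -> 100% branch coverage achieved
```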
1.6 THE PSYCHOLOGY OF TESTING
1.6.1 Purpose
The purpose of this section is to explore differences in perspective between tester and developer (buyer & builder) and explain some of the difficulties management and staff face when working together developing and testing computer software.
1.6.2 Different mindsets
We have already discussed that one of the primary purposes of testing is to find faults in software, i.e. it can be perceived as a destructive process. The development process, on the other hand, is a naturally creative one, and experience shows that staff who work in development have a different mindset from that of testers.
We would never argue that one group is intellectually superior to the other, merely that they view systems development from different perspectives. A developer is looking to build new and exciting software based on users' requirements and really wants it to work (first time, if possible). He or she will work long hours and is usually highly motivated and very determined to do a good job.
A tester, however, is concerned that the user really does get a system that does what they want, is reliable, and doesn't do things it shouldn't. He or she will also work long hours looking for faults in the software, but will often find the job frustrating as their destructive talents take their toll on the poor developers. At this point there is often much friction between developer and tester: the developer wants to finish the system, but the tester wants all faults in the software fixed before their work is done.
In summary:
Developers:
Are perceived as very creative - they write code without which there would be no system!
Are often highly valued within an organization.
Are sent on relevant industry training courses to gain recognized qualifications.
Are rarely good communicators (sorry guys)!
Can often specialize in just one or two skills (e.g. VB, C++, JAVA, SQL).
Testers:
Are perceived as destructive - only happy when they are finding faults!
Are often not valued within the organization.
Usually do not have any industry-recognized qualifications - until now.
Usually require good communication skills, tact and diplomacy.
Normally need to be multi-talented (technical, testing, team skills).
1.6.3 Communication between developer and tester
It is vitally important that the tester can explain and report a fault to the developer in a professional manner to ensure the fault gets fixed. The tester must not antagonize the developer. Tact and diplomacy are essential, even if you've been up all night trying to test the wretched software.
1.6.4 How not to approach
Tester: "Hey Fred. Here's a fault report AR123. Look at this code. Who wrote this? Was it you? Why, you couldn't program your way out of a paper bag. We really want this fixed by 5 o'clock or else."
We were unable to print Fred's reply because of the language! Needless to say Fred did not fix the fault as requested.
Exercise
Your trainer will split you into small test teams. One of you will be the test team leader. You have found several faults in a program and the team leader must report these to the developer (your trainer). The background is that your team has tested this program twice before and there are still quite a lot of serious faults in the code. There are also several spelling mistakes and wrong colors on the screen layout. The test team is getting a bit fed up. However, you have to be as nice as possible to the developer.
1.6.6 Why can't we test our own work?
This seems to be a human problem in general not specifically related to software development. We find it difficult to spot errors in our own work products. Some of the reasons for this are:
We make assumptions
We are emotionally attached to the product (it's our baby and there's nothing wrong with it).
We are so familiar with the product we cannot easily see the obvious faults.
We're humans.
We see exactly what we want to see.
We have a vested interest in passing the product as ok and not finding faults.
Generally it is thought that objective, independent testing is more effective. There are several levels of independence, as follows:
Test cases are designed by the person(s) writing the software.
Test cases are designed by another person(s).
Test cases are designed by a person(s) from a different section.
Test cases are designed by a person(s) from a different organization.
Test cases are not chosen by a person.
The discussion of independent test groups and outsourcing is left to another section.
1.7 RE-TESTING AND REGRESSION TESTING
We find and report a fault in LOG 3, which is duly fixed by the developer and included in the latest release which we now have available for testing. What should we do now?
Examples of regression tests not carried out include:
The day the phones stopped.
LAS failure on 4th November (perhaps)
Ariane 5 failure.
Whenever a fault is detected and fixed, the software should be re-tested to ensure that the original fault has been successfully removed. You should also consider testing for similar and related faults. This is made easier if your tests are designed to be repeatable, whether they are manual or automated.
Regression testing attempts to verify that modifications have not caused unintended adverse side effects in the unchanged software (regression faults) and that the modified system still meets requirements. It is performed whenever the software, or its environment, is changed.
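A repeatable regression check can be sketched as follows. The `format_name()` function, its earlier whitespace fault, and the captured cases are all hypothetical, standing in for whatever faults your project has fixed in the past:

```python
# Sketch: a regression pack built from cases captured when earlier
# faults were fixed, re-run after every change to detect regressions.

def format_name(first, last):
    # Current behaviour, after an earlier fix that strips stray spaces.
    return f"{first.strip()} {last.strip()}"

REGRESSION_CASES = [
    # ((inputs), expected) pairs recorded when faults were fixed.
    (("Ada", "Lovelace"), "Ada Lovelace"),
    ((" Ada ", "Lovelace "), "Ada Lovelace"),  # the old whitespace fault
]

def run_regression_pack():
    # An empty list of failures means no regression faults detected.
    return [(args, want) for args, want in REGRESSION_CASES
            if format_name(*args) != want]
```

Because the expected results are recorded alongside the inputs, the pack is repeatable by anyone (or by a tool) after any later modification.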
Most companies will build up a regression test suite, or regression test pack, over time, adding new tests, deleting unwanted tests and maintaining tests as the system evolves. When a major software modification is made, the entire regression pack is likely to be run (albeit with some modification). For minor planned changes or emergency fixes, the test manager must be selective during test planning and identify how many of the regression tests should be attempted. In order to react quickly to an emergency fix, the test manager may create a subset of the regression test pack for immediate execution in such situations.
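One simple way to carve out such a subset is to tag each test and filter by tag. The tags, test names, and trivially-passing test bodies below are illustrative assumptions only; real packs would use their test framework's own tagging mechanism:

```python
# Sketch: tagging regression tests so an emergency fix can run a
# fast "smoke" subset, while a major release runs the whole pack.

REGRESSION_PACK = [
    {"name": "login_happy_path", "tags": {"smoke"},         "run": lambda: True},
    {"name": "report_totals",    "tags": {"full"},          "run": lambda: True},
    {"name": "audit_trail",      "tags": {"full", "slow"},  "run": lambda: True},
]

def select(pack, tag):
    # Pick only the tests carrying the requested tag.
    return [t for t in pack if tag in t["tags"]]

def execute(tests):
    # Map each test name to its pass/fail result.
    return {t["name"]: t["run"]() for t in tests}

smoke_results = execute(select(REGRESSION_PACK, "smoke"))
# A major modification would instead run execute(REGRESSION_PACK).
```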
Regression tests are often good candidates for automation provided you have designed and developed automated scripts properly (see automation section).
In order to have an effective regression test suite, good configuration management of your test assets is desirable, if not essential. You must have version control of your test documentation (test plans, scripts etc.) as well as your test data and baseline databases. An inventory of your test environment (hardware configuration, operating system version etc.) is also necessary.
1.8 EXPECTED RESULTS
The specification of expected results in advance of test execution is perhaps one of the most fundamental principles of testing computer software. If this step is omitted, the human subconscious desire for tests to pass will be overwhelming, and the tester may interpret a plausible yet erroneous result as a correct outcome.
As you will see when designing tests using black box and white box techniques, there is ample room within the test specification to write down your expected results, and therefore no real excuse for not doing so. If you are unable to determine the expected result for a particular test you had in mind, then it is not a good test: (a) you will not be able to determine whether it has passed or not, and (b) you will never be able to repeat it.
Even with a quick-and-dirty ad hoc test it is advisable to write down beforehand what you expect to happen. This may all sound pretty obvious, but many test efforts have foundered by ignoring this basic principle.
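The principle can be made mechanical: fix the expected result first, then let the comparison decide the verdict with no room for rationalising. The `rounded_sqrt()` system under test is a hypothetical stand-in:

```python
import math

def rounded_sqrt(n):
    # Hypothetical system under test.
    return round(math.sqrt(n))

def run_test(test_input, expected):
    # 'expected' is fixed *before* the call is made, so a plausible
    # but wrong actual result cannot be reinterpreted as correct.
    actual = rounded_sqrt(test_input)
    return {"input": test_input, "expected": expected,
            "actual": actual, "verdict": actual == expected}

result = run_test(2, 1)  # expected result decided in advance
```

If you cannot fill in the `expected` argument before calling `run_test`, the test fails the criteria above: its verdict would be guesswork and it could not be repeated.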
" The major difference between a thing that might go wrong and a thing that cannot possibly go wrong is that when a thing that cannot possibly go wrong does go wrong it usually turns out to be impossible to get at or repair."
--Douglas Adams