Automated Testing: What Causes Defects to be Missed?

Posted by Dana Edmondson in Automation, Automated Software Testing Life Cycle

Test automation is the use of software to conduct testing that requires no operator input. Regardless of how testing is executed, however, the foundation of testing is still the same. According to Glenford Myers in The Art of Software Testing, "Software testing is the process of executing a program or system with the intent of finding errors." So then we may ask ourselves, "Why do automated testers fail to deliver test scripts that find defects?" The answer to this question is multifaceted.

Mindset
The first aspect to consider is the mindset of many automated testers. Automated testers come from many backgrounds: some are former developers, others manual testers who sought to increase their visibility in the testing arena. Regardless of background, however, many of these testers develop tunnel vision when creating test automation scripts: they become so focused on coding the tests that they fail to see the big picture.

Automated test creation should be no different from writing a manual test case. The tester should walk through the requirement manually as they code the test. This ensures that the automated test script produces the same outcome as a test executed manually. Essentially, the basics are followed, the expected result is verified, and the ROI is secured.

Failing to perform these walkthroughs while coding can have many adverse effects. Perhaps there is an outstanding defect and the program does not produce the expected result. In any case, a test script created without a proper walkthrough only tests the current release rather than the actual requirement at hand, making the test script unreliable. Automated testers should therefore stick to the basics, resist the urge to code beyond what is currently "on-screen", and retain a mindset akin to a manual tester's, which keeps the focus on the requirement and frees them from a tunnel-vision approach.
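For illustration, here is a minimal sketch of what "coding against the requirement, not the current build" can look like. It uses Python with pytest; the `app` fixture, its page methods, and the requirement values are assumptions invented for this example, not part of any real project or product API.

```python
import pytest

# Hypothetical expected result taken from the requirement document,
# not from whatever the current build happens to display.
REQUIRED_DISCOUNT_RATE = 0.10  # e.g. "repeat customers receive a 10% discount"


def test_repeat_customer_discount(app):
    """Mirror the manual walkthrough of the requirement step by step."""
    # Step 1 (manual walkthrough): log in as a repeat customer.
    app.login("repeat_customer", "password")

    # Step 2: add an item to the cart and open the order summary.
    app.add_to_cart("SKU-1001")
    summary = app.open_order_summary()

    # Step 3: assert the requirement's expected result, not the on-screen value.
    expected_discount = summary.subtotal * REQUIRED_DISCOUNT_RATE
    assert summary.discount == pytest.approx(expected_discount), (
        "Discount does not match the requirement; if the application differs, "
        "this is a candidate defect, not a reason to change the test."
    )
```

The point of the sketch is the last assertion: the expected value comes from the requirement, so an outstanding defect in the current release makes the test fail rather than quietly becoming the new baseline.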

Non-reproducible tests
The second aspect of the equation is non-reproducible tests. Automated testers must remember that when a defect is submitted, a developer will attempt to reproduce it. If an automated test is not reproducible, then the defect may be considered non-reproducible and thus disregarded. To make automated tests robust, one must ensure not only that the tests are well maintained and continuously run, but also that the defect can be reproduced prior to submission.

A good rule of thumb is to reproduce the defect three times before submission. If one cannot reproduce the anomaly, a warning should be logged and development notified of the find, but it should not be submitted as a viable defect. A well-crafted automation framework will execute the test or set of tests (regression), capture and validate data during execution, and provide the ability to return the data to its baseline state before the test is re-run. If tests are non-reproducible, then validation and checkpoints are also compromised. Remember, reproducible automated tests have a large ROI when it comes to the re-creation of defects: a developer can then see in real time the offending code and the execution steps that were taken.
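One way to automate the "three times" rule is sketched below in Python. The `run_test` and `reset_baseline` callables are assumed hooks into your own framework, not a real library API; this is only an illustration of the reset-and-retry idea under those assumptions.

```python
from typing import Callable


def is_reproducible(run_test: Callable[[], bool],
                    reset_baseline: Callable[[], None],
                    attempts: int = 3) -> bool:
    """Return True only if the defect reproduces on every attempt.

    run_test       -- executes one test iteration, returning True when the
                      defect (the unexpected result) is observed.
    reset_baseline -- restores the application data to its baseline state
                      so each re-run starts from the same point.
    """
    for _ in range(attempts):
        reset_baseline()
        if not run_test():
            # The anomaly did not reappear: log a warning and notify
            # development, but do not submit it as a viable defect.
            return False
    return True
```

If `is_reproducible` returns False, the framework can record a warning in the test log and flag the anomaly for development instead of opening a defect report.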

Improper debugging
The third way automated tests fail to find defects is improper debugging. In a perfect world, every automated script would run correctly the first time, but since testers are in the habit of breaking things, there are days when even the easiest automated tests fail to execute properly. On those days, the best attribute a tester can have is the ability to track down an error within dozens or hundreds of lines of code. Many testers will simply dismiss the failed test, fix it to suit the new "change", and continue with their day.

Others will automatically assume the failure is a defect and submit a defect report. One must remember, though, that debugging is not only the ability to fix one's own automation code, but also the ability to determine whether a test is failing because of a script error or because of a change in the application. Once a tester has determined through debugging that the script failure is due to a program change, a few other steps must be taken before the "change" or unexpected windows are considered defects.
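As a rough illustration of that distinction, the Python sketch below separates failures in the automation code itself from failures caused by the application behaving unexpectedly. The exception categories and the logger name are assumptions about a typical framework, not any specific product's API.

```python
import logging

logger = logging.getLogger("automation")


def classify_failure(test_callable) -> str:
    """Run a test and classify its outcome for triage."""
    try:
        test_callable()
        return "passed"
    except AssertionError as exc:
        # A checkpoint failed: the application did not match the expected
        # result. Follow the manual verification steps before submitting a defect.
        logger.warning("Possible application change or defect: %s", exc)
        return "application_mismatch"
    except Exception as exc:
        # Any other error (missing element, bad locator, typo in the script)
        # is most likely a problem in the automation code, not the product.
        logger.error("Script error; fix the automation code first: %s", exc)
        return "script_error"
```

Only the "application_mismatch" bucket should ever proceed toward the defect-submission steps described next.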

The first step, of course, is to check the system requirement the test links to and make sure that no updates to the requirement have occurred. Next, the tester should check with development to make sure a change has not occurred that has not yet been communicated to the testing department. If each of these steps returns the same result (no changes have occurred), then a defect must be entered. Again, defect submission must follow all the steps a manual counterpart would take, including not only recreating the defect via the test script but also verifying that the same defect occurs manually.

Automation framework
The last and perhaps most important obstacle that prevents test scripts from locating defects relates directly to the automation framework that is created. The inability to expand an automation framework often hinders the tester's ability to find defects. Many automated testers have the short-term goal of creating a regression suite of positive tests that walk through the application and exercise each scenario and requirement. Often a few negative tests are included for good measure; however, the baseline regression suite never evolves: it never adds new validation, checkpoints, or alternate navigation to meet the requirement's expectation, and never adds new test scripts that test the boundaries of the program.
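As a small sketch of what growing beyond the baseline can look like, the same regression entry can acquire boundary and negative cases alongside its original positive case. The example below uses Python with pytest; the `validate_quantity` function and its 1–99 limits are illustrative assumptions standing in for whatever rule the requirement actually specifies.

```python
import pytest


# Illustrative rule assumed for this sketch: quantity must be between 1 and 99.
def validate_quantity(value: int) -> bool:
    return 1 <= value <= 99


@pytest.mark.parametrize("quantity,expected", [
    (1, True),     # original positive baseline case
    (50, True),    # typical value
    (99, True),    # upper boundary
    (0, False),    # just below the lower boundary (negative test)
    (100, False),  # just above the upper boundary (negative test)
    (-5, False),   # clearly invalid input
])
def test_quantity_boundaries(quantity, expected):
    assert validate_quantity(quantity) is expected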

As an automated tester, one must remember that eventually everyone will know exactly how the automation scripts function, and changes are often made to the program that will never directly affect this code, making defect detection much harder. A good test automation team member must therefore maintain the current regression tests and elaborate on them, testing at different levels and in areas of the application previously untouched. These test scripts are usually harder to write, and many in the automation world find that defects are not as prevalent beyond the baseline. However, expanding the automation increases coverage and provides a larger framework to build on and new goals to achieve.

Conclusion
Defect detection in automated script creation and execution is notably harder than manual defect detection. Automated testers must rely not only on the software that executes the tests, but also on their manual testing skills. More importantly, automated testers must know their testing audience to ensure that tests are not formulated and designed to always "pass" because others know their "code". Defect detection is the unassuming art of automated testers who not only know how to manipulate software to test a program, but also never forget the basics of quality assurance.


About the Author: Dana Edmondson is a certified technical lead for DeRisk IT Inc. She specializes in project management and test automation, and has extensive experience with SmartBear's TestComplete.

Note: DeRisk IT is now known as DeRisk QA.