Many people look to test automation as an alternative to manual testing, but struggle to understand how a machine can offer the same scope and coverage as a real person. The short answer is: it can't, which is why we advocate a hybrid model at ProtoTest. Both people and machines have their place in testing.
It’s easy for a person to see that a button is partially hidden or that text is the wrong size. Technology has come so far that it doesn’t seem like a stretch to expect a machine to 'see' the same problems a person would. However, while it’s possible to build an automated script that checks the size and location of each button, frame, and text element, doing so is incredibly complicated and often causes more problems than it fixes. The application’s source code already specifies how it should function and how it should look; validating every aspect of an application would require roughly as much code as the application itself.
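To picture why exhaustive layout validation balloons, here is a minimal sketch; the element data, expected layout, and function names are illustrative assumptions, not any particular tool's API. Notice that every element needs its own hand-maintained expected geometry, which is exactly how the validation code grows toward the size of the application itself.

```python
# Minimal sketch: checking each element's position and size against
# hand-maintained expected values. The dictionaries below are stand-ins
# for real UI element data; this is not a real automation framework's API.

def validate_layout(actual_elements, expected_layout, tolerance=2):
    """Return a list of mismatch descriptions for elements whose position
    or size differs from the expectation by more than `tolerance` pixels."""
    problems = []
    for name, spec in expected_layout.items():
        actual = actual_elements.get(name)
        if actual is None:
            problems.append(f"{name}: element not found")
            continue
        for prop in ("x", "y", "width", "height"):
            if abs(actual[prop] - spec[prop]) > tolerance:
                problems.append(f"{name}: {prop} is {actual[prop]}, expected {spec[prop]}")
    return problems

# Every element needs its own expected geometry, so this spec grows with the app.
expected = {"login_button": {"x": 100, "y": 200, "width": 80, "height": 30}}
actual = {"login_button": {"x": 100, "y": 210, "width": 80, "height": 30}}
print(validate_layout(actual, expected))  # ['login_button: y is 210, expected 200']
```

Multiply that one-line expectation by every button, frame, and text element on every page, and the maintenance burden becomes clear.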
Businesses want automation that provides both broad coverage and minimal maintenance, and this is the crux of the issue: the more validation required, the more complicated the automation becomes, and the more maintenance it demands. After all, if automation truly validated every aspect of an application, it would fail after any change to the application, including new releases. The goal, then, is to create an automation suite that provides breadth of coverage without significant maintenance.
For this reason, most web automation suites focus on testing an application’s functionality and don’t bother validating how it looks, at least initially. A number of automation tools support image comparisons, so it is certainly possible to perform an image-based test. However, plenty of things can change from run to run (fonts, animations, dynamic content, rendering differences between environments), making the tests problematic. Eventually the tests break and end up failing repeatedly. So, how can we maintain a large suite of tests long term?
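A small sketch shows why image-based checks are so brittle; the pixel grids here are stand-ins for real screenshots, and the function is illustrative rather than any specific tool's comparison API. A single noisy pixel passes, but a one-pixel rendering shift changes nearly every pixel and fails the test even though nothing meaningful changed.

```python
# Illustrative pixel-level image comparison (grayscale integer grids as
# stand-ins for screenshots). The comparison passes when the fraction of
# differing pixels stays below `max_diff_ratio`.

def images_match(img_a, img_b, pixel_tolerance=10, max_diff_ratio=0.01):
    total = 0
    differing = 0
    for row_a, row_b in zip(img_a, img_b):
        for a, b in zip(row_a, row_b):
            total += 1
            if abs(a - b) > pixel_tolerance:
                differing += 1
    return (differing / total) <= max_diff_ratio

# A 100x100 "screenshot" with a horizontal gradient.
baseline = [[(col * 20) % 256 for col in range(100)] for _ in range(100)]

rerun = [row[:] for row in baseline]
rerun[0][0] = 255  # one noisy pixel: harmless, and the comparison agrees

# The same page rendered shifted one pixel to the left: visually identical
# to a person, but nearly every pixel value changes.
shifted = [[((col + 1) * 20) % 256 for col in range(100)] for _ in range(100)]

print(images_match(baseline, rerun))    # True
print(images_match(baseline, shifted))  # False
```

Raising the tolerance to absorb the shift would also hide real defects, which is why these tests tend to drift into repeated failure.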
It’s easy to build a single test that runs today; it’s much harder to build a suite of tests that will run for months and years. It’s far better to have a test you can run consistently with very little maintenance than a large, bulky test that covers everything but constantly breaks. So instead of testing for everything, ProtoTest uses our proven prioritization approach, which focuses first on "Smoke Tests", then "Acceptance Tests". We verify that a user can log in, perform an action, and log out. Automation is supplemented by a manual testing effort. As we like to say at ProtoTest, "Automation makes humans more efficient, not less essential."
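One way to picture this prioritization is a tiered runner: smoke tests execute first, and later tiers are skipped when an earlier tier fails. The tier names and check functions below are illustrative stand-ins, not a real framework.

```python
# Illustrative sketch of priority-ordered test execution: smoke tests run
# first, and later tiers are skipped if an earlier tier fails. The check
# functions are stand-ins for real automated tests.

def run_suite(tiers):
    """tiers: list of (tier_name, [test_callable, ...]) in priority order.
    Each callable returns True on pass. Stops after the first failing tier."""
    results = {}
    for name, tests in tiers:
        passed = all(test() for test in tests)
        results[name] = "passed" if passed else "failed"
        if not passed:
            break  # no point running acceptance tests on a broken build
    return results

# Stand-in checks for the core user path: log in, perform an action, log out.
def can_log_in():      return True
def can_place_order(): return True
def can_log_out():     return True

suite = [
    ("smoke",      [can_log_in, can_log_out]),
    ("acceptance", [can_place_order]),
]
print(run_suite(suite))  # {'smoke': 'passed', 'acceptance': 'passed'}
```

Keeping each tier small and stable is what lets the suite run for months without constant repair.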
Testers are not robots designed to exercise every possible path through an application, just as a script cannot sense meaning, infer context, or express empathy or frustration the way a person can. Rather than expecting testers to perform as robots, or scripts to perform as people, accept that testers and automation bring different strengths to your testing project.
Accept that automation has limits: it will not catch every odd, random defect, and it will not notice that a button is slightly off its intended position. Instead, imagine a world where, with the push of a button, you can validate every major user path through your application. When the major functionality is in working order, a small amount of manual testing can validate the look of the application. The testing team can then focus on edge cases, validating all the contextual information that a human brain handles so well.
Using this hybrid approach, we can reduce the testing window from days to hours. This allows businesses to release their product faster and become more agile. Accepting a minimal amount of risk allows for a cheaper, faster release cycle. And rather than living in fear of releasing a major defect, businesses can spend their time and money fixing minor problems and building new functionality.