Wednesday, February 17, 2016

Assertions are bad

Yup, I said it.  We're all thinking it.  We've all had problems with them, but no one has come out and said it.  

Don't use assertions with selenium-webdriver.  Use the WebDriver API's wait methods instead.  Many people struggle with unstable tests, and assertions are one of the primary culprits.  

The reason is simple.  Modern web applications use a large amount of client-side JavaScript to render a web page, which reduces the number of HTTP requests that need to be made.  When a page is loaded, the JavaScript is usually the last thing to execute; it then reprocesses the page, rendering the DOM and modifying the HTML, such as displaying an element or setting its text.  The user can even start interacting with the website, entering text, hiding or displaying modals, all without reloading the page. 

This means there is a window of time during which an element is findable by WebDriver but not yet in its final state.  If an assertion runs during this window, it will fail incorrectly, because the element isn't actually ready to be checked.  

The wait methods provided by the WebDriver API are essentially assertions that retry multiple times before failing the test.  Even a five-second timeout can provide enough of a cushion to improve test stability drastically.  

The WebDriver API's wait methods can be used for any condition.  If needed, they can be wrapped in simple helper methods.  
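The explicit-wait pattern is available in every WebDriver language binding.  As a minimal sketch, here is the same idea in the C# binding (the URL, locator, and expected text are placeholders, not taken from a real application):

using System;
using OpenQA.Selenium;
using OpenQA.Selenium.Chrome;
using OpenQA.Selenium.Support.UI;

class ExplicitWaitSketch
{
    static void Main()
    {
        IWebDriver driver = new ChromeDriver();
        driver.Navigate().GoToUrl("http://example.com/orders");

        // Retry the condition (every 500 ms by default) for up to 5 seconds
        // before failing, instead of asserting against the text immediately.
        var wait = new WebDriverWait(driver, TimeSpan.FromSeconds(5));
        wait.Until(d => d.FindElement(By.Id("status")).Text.Contains("Complete"));

        driver.Quit();
    }
}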

Below is a custom wait function that I can easily call from my code.  Right now it only verifies an element's text, but it could easily be modified to check any other condition inside the until() block.  

def wait_for_text(value, timeout = $element_timeout)
  wait("Element #{self} text was not correct after #{timeout} seconds. Expected '#{value}' Got '#{element.text}'", timeout).until {
    element.text.include? value.to_s
  }
  $logger.info "Verified #{self} text contains '#{value}'"
  self
end


Thursday, December 11, 2014

How Do You Automate in an Agile World?


Published: October 08, 2014 by Brian Kitchener
We get it, test automation is hard. It’s hard building a suite of automation that works consistently and reliably. It’s even harder to keep that suite of automation working for months or years at a time. And it’s hardest when you’re trying to do so under the rapid pace of an agile software development world. After all, the agile mantra says that we design, build, and test code in the same sprint. And most of the time the code is barely functional -- it’s only in the last several days that everything comes together to the point that the GUI is usable. How, then, can we successfully build test automation when we have such a short window?
It’s actually not that difficult, assuming you have a properly designed suite of automation. In order to be able to easily maintain tests, they must be built in a modular, scalable, reusable way.
However, if your automated tests look like hundreds of click commands, it becomes impossible to update them quickly when the application changes. Tests like these are typically the result of a record-and-playback strategy. The result often involves throwing away lots of test code whenever the application is changed.
Contrast this with the ProtoTest Golem framework, which offers a simple, yet powerful, method for modelling and abstracting applications: page objects. The page object model lets us represent the application in a layer of code separate from our tests. Any changes to the application necessitate only a single set of modifications to the page object, rather than re-writing lots of tests. It’s not that difficult to perform most of the updates even before the UI is fully revised.
Once the new app code is deployed, the test engineer only needs to input valid locators for each changed element, and is able to immediately run the test suite. This quick turnaround in response to changes is the benefit of our approach, allowing testers to keep pace with their agile teammates.
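As a minimal sketch of the page object idea (the page, fields, and locators below are illustrative placeholders, not Golem’s actual API), tests call methods on a class like this, and a UI change means updating locators in exactly one place:

using OpenQA.Selenium;

public class LoginPage
{
    private readonly IWebDriver driver;

    // Locators live in one place, so a changed element means one edit here.
    private static readonly By UsernameField = By.Id("username");
    private static readonly By PasswordField = By.Id("password");
    private static readonly By LoginButton = By.Id("login");

    public LoginPage(IWebDriver driver)
    {
        this.driver = driver;
    }

    public void LogIn(string username, string password)
    {
        driver.FindElement(UsernameField).SendKeys(username);
        driver.FindElement(PasswordField).SendKeys(password);
        driver.FindElement(LoginButton).Click();
    }
}

A test then calls new LoginPage(driver).LogIn(user, pass) and never touches locators directly, so the test itself rarely needs to change.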
Now you see why it is so important to choose a tool (or framework) that enables test code reuse and minimizes the need to revise tests themselves. The more maintainable your test scripts, the more likely you will be able to move at the speed of Agile.
This is why so many of our clients choose the Golem framework. It provides a simple, clean, and repeatable process that allows your organization to implement large-scale automation successfully over the long-run.

The Value of API Testing


Published: September 25, 2014 by Brian Kitchener
At ProtoTest, we often recommend that our clients implement a suite of automated functional API tests. And yet, many people struggle to understand how truly valuable API tests are.
Since they don’t test something directly user-facing, clients don’t want to prioritize them over creating UI tests. And because the UI tests are never finished, the API tests are never even started.
Yet if you analyze the return on investment, API automation can save the organization a significant amount of money, because the API is the backbone of the entire application stack. It is the interface that all the pieces use to communicate with each other. It’s a contract, stating exactly how each piece of the application has to work. As such, it’s the glue that holds everything together.
What tends to happen is that there are multiple client applications, such as a mobile app and a website, that both use the same services layer. In order to support a new feature on one client, the services are modified in a way that the other client doesn’t expect. This causes the other client to break, even though the release had nothing to do with it. The only way to catch this type of error is to test both clients, which costs a lot of time and money.
The API layer usually sits on top of the back end of the application stack. For a web site, it would be a series of REST services that can read from and write to the database, and execute code on the server. These services are usually shared across multiple front-end applications, so once the API is functionally tested we can assume the back-end system is fully operational. This allows us to test and release the back end completely separately from the UI, which dramatically and immediately reduces the cost of making changes to the back-end code.
API tests allow you to test components in isolation, dramatically reducing the number of test cases needed each release. And since API tests run almost instantaneously and without any human interaction, even if a defect does make it into production, it’s extremely cheap and easy to test and release a fix.
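As a minimal sketch of what a functional API test can look like (the endpoint and expected response are hypothetical placeholders; this example uses NUnit and HttpClient):

using System.Net;
using System.Net.Http;
using System.Threading.Tasks;
using NUnit.Framework;

[TestFixture]
public class ItemsApiTests
{
    private static readonly HttpClient Client = new HttpClient();

    [Test]
    public async Task GetItems_ReturnsSuccessAndANonEmptyBody()
    {
        // Exercise the service directly, with no browser or UI involved.
        var response = await Client.GetAsync("https://api.example.com/v1/items");
        Assert.AreEqual(HttpStatusCode.OK, response.StatusCode);

        var body = await response.Content.ReadAsStringAsync();
        Assert.IsNotEmpty(body, "Expected the items endpoint to return data.");
    }
}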
A suite of functional API tests puts the organization in a position where you can release quickly, cheaply, and independently. These are the true goals of any effective testing organization, and this is why they are one of our first recommendations.
Interested in learning how to implement a suite of automated API tests? Contact me to learn more.

Dealing With Data in Test Automation


Published: June 17, 2014 by Brian Kitchener
Figuring out how to manage test data can certainly be challenging. In order for an automated test to run, it needs to have some sort of data. What kind of data it needs will largely depend on the type of application being tested.
For an ecommerce site, an automated test might need a username and password, an item name to add to the cart, and a credit card number to use during checkout. However, these data items can change and fluctuate over time. Perhaps the database is refreshed, wiping out the user’s account. Perhaps the item being purchased is no longer available. Perhaps the credit card has expired.
Whatever the reason, the data used during a test can and will change over time and we need some way of dealing with it. There are four primary ways of dealing with fluctuating test data:

1 - Past State

Since tests need to use a specific data set, the first option is simply to reset the application to a previous state instead of having the tests worry about data expiring. We can do this by saving the production database at a specific point in time and then refreshing our testing environment’s database back to this image before each test run. This ensures the data the test expects is always the same. This task can be performed by the IT department, developers, or QA engineers, and it can be automated with a CI server like Jenkins.

2 - Get State

The second option is to fetch the application’s current state and pass that data into our test automation. We can do this by reading from the database, scraping information off the GUI, or calling a service. For example, before the automated test attempts to add an item to a cart, it makes an HTTP request to a REST service to get the list of current and active items. We now have a set of items we know should be valid. Fetching test data can be done before each test, if the automation tool supports it. Alternatively, a daily job can be scheduled to store the current data to a file, and the file can be parsed by the automated suite.
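As a sketch of this ‘Get State’ approach (the endpoint and JSON shape are hypothetical placeholders), a helper can fetch the currently valid items before the tests run:

using System.Net.Http;
using System.Text.Json;
using System.Threading.Tasks;

public static class TestDataFetcher
{
    private static readonly HttpClient Client = new HttpClient();

    // Returns the names of items that are currently active and safe to use in tests.
    public static async Task<string[]> GetActiveItemNamesAsync()
    {
        var json = await Client.GetStringAsync("https://api.example.com/v1/items?status=active");

        using var doc = JsonDocument.Parse(json);
        var items = doc.RootElement.GetProperty("items");

        var names = new string[items.GetArrayLength()];
        for (int i = 0; i < names.Length; i++)
        {
            names[i] = items[i].GetProperty("name").GetString();
        }
        return names;
    }
}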

3 - Set State

A separate set of automated scripts exists simply to create the data needed through the application’s GUI. These may run against a back-end application different from the end-user application. For example, many web applications have an administrative GUI that can be used to create or delete items in the system. We can automate this application to create the users and items needed for the automated tests. Because it is a separate set of scripts, these only need to be run periodically - after a database refresh, when a new environment is spun up, and so on.
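A data-setup script of this kind might look like the following sketch, which drives a hypothetical admin GUI with WebDriver (the URL and locators are placeholders):

using OpenQA.Selenium;
using OpenQA.Selenium.Chrome;

public static class AdminDataSetup
{
    // Creates a user through the admin GUI so the main test suite can rely on it.
    public static void CreateTestUser(string username, string password)
    {
        IWebDriver driver = new ChromeDriver();
        try
        {
            driver.Navigate().GoToUrl("https://admin.example.com/users/new");
            driver.FindElement(By.Id("username")).SendKeys(username);
            driver.FindElement(By.Id("password")).SendKeys(password);
            driver.FindElement(By.Id("save")).Click();
        }
        finally
        {
            driver.Quit();
        }
    }
}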

4 - Zero Sum

In this approach we create and delete our data as part of the test. If the scripts require a username and password, instead of assuming that the user is already created, they create the user as the first step and delete the user as the last step. This, of course, assumes that full CRUD (Create, Read, Update, Delete) functionality is available in the application. This may seem like additional work, but this functionality needs to be tested anyway. For example, we create four tests, each run sequentially: the first creates a new item in the system; the second test reads that item and verifies the data is correct; the third updates the item and verifies the update was successful; and the fourth deletes the item and confirms the item is gone.
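A sketch of that four-test sequence, written against a hypothetical REST endpoint using NUnit's Order attribute (the URL and JSON fields are placeholders):

using System.Net;
using System.Net.Http;
using System.Text;
using System.Text.Json;
using System.Threading.Tasks;
using NUnit.Framework;

[TestFixture]
public class ItemCrudTests
{
    private static readonly HttpClient Client = new HttpClient();
    private const string BaseUrl = "https://api.example.com/v1/items"; // placeholder
    private static string itemId;

    [Test, Order(1)]
    public async Task CreateItem()
    {
        var response = await Client.PostAsync(BaseUrl,
            new StringContent("{\"name\":\"Test Widget\"}", Encoding.UTF8, "application/json"));
        Assert.IsTrue(response.IsSuccessStatusCode);

        using var doc = JsonDocument.Parse(await response.Content.ReadAsStringAsync());
        itemId = doc.RootElement.GetProperty("id").GetString();
    }

    [Test, Order(2)]
    public async Task ReadItem()
    {
        var body = await Client.GetStringAsync($"{BaseUrl}/{itemId}");
        StringAssert.Contains("Test Widget", body);
    }

    [Test, Order(3)]
    public async Task UpdateItem()
    {
        var response = await Client.PutAsync($"{BaseUrl}/{itemId}",
            new StringContent("{\"name\":\"Renamed Widget\"}", Encoding.UTF8, "application/json"));
        Assert.IsTrue(response.IsSuccessStatusCode);
    }

    [Test, Order(4)]
    public async Task DeleteItem()
    {
        var response = await Client.DeleteAsync($"{BaseUrl}/{itemId}");
        Assert.IsTrue(response.IsSuccessStatusCode);

        // The item should be gone, leaving the system in its original state.
        var check = await Client.GetAsync($"{BaseUrl}/{itemId}");
        Assert.AreEqual(HttpStatusCode.NotFound, check.StatusCode);
    }
}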
When at all possible, a Zero Sum approach is best, as it is self-contained, very efficient, and almost forces us to have good test coverage. However, most of the time a hybrid approach is needed, using more than one of these strategies in combination. For example, we might have a ‘Set State’ style script that creates the cross-test data, such as users. Then each test would try to be ‘Zero Sum’ with respect to its own data. Using the right combination of approaches can drastically reduce test execution time while also increasing coverage and efficiency.

Image Validations Using Golem


Published: May 27, 2014 by Brian Kitchener
WebDriver is an amazing tool, and it has the broadest support of any open-source testing tool on the market. However, many commercial tools offer something that WebDriver does not: the ability to perform a visual image comparison.
Image comparisons are useful to identify visual defects, and to determine CSS problems, rendering issues, and layout defects. However, historically they haven’t worked well, as the slightest change in the UI would break the validation. In addition, full-page screenshots were typically the only images supported, so even a small change in layout (such as running the test on a different computer) would make the validation fail.
To this end, the Golem framework (https://github.com/ProtoTest/ProtoTest.Golem) includes code to easily perform image-based validations on individual UI elements. This allows an individual button or panel to have its UI validated. In addition, it uses a configurable “fuzzy” image comparison algorithm that will adjust for different sized images, or slight rendering differences between runs.
So, an image that is recorded at one resolution in Firefox can be compared against an image that was recorded at a different resolution in Chrome. The comparison can be adjusted with two separate parameters to allow fine-tuning for each element.
In the following example, the compared source images must be built and stored by Golem. However, the underlying methods could be used with any two images.
Image validations are currently supported only for elements. Golem uses the name of the element, passed in during instantiation, to save the image file to the hard drive. During test execution, if no image is found on disk, the current image is saved. This means we must first execute our test against a “good” site so that it captures and stores the baseline images. Every subsequent run will compare against those images. If a baseline image needs to be updated, simply delete it from the hard drive and a new image will be saved the next time the test is run.
To perform an image comparison, we simply define our element and verify the image:
[Test]
public void TestImage()
{
    driver.Navigate().GoToUrl("http://www.google.com");
    Element searchButton = new Element("SearchButton", By.Name("btnK"));
    searchButton.Verify().Image();
}
After we execute the test the first time, the image is stored to the hard drive in: ProjectDirectory\ElementImages\SearchButton.bmp
[Image: ImageComp1]
Subsequent runs will compare against this stored image. If we switch our browser to Chrome, the validation will still pass, even though the size and font are slightly different between browsers. Now in the report we see:
[Image: ImageComp2]
To see what a test failure looks like, we will first edit our image and mark it:
[Image: ImageComp3]
Now when we execute the test, the report will show an error, and the difference between the images:
[Image: ImageComp4]
There you have it: a simple, clean way to compare the visual look of UI elements. The Golem framework contains an example test using image comparisons here: https://github.com/ProtoTest/ProtoTest.Golem/blob/master/ProtoTest.Golem/Tests/TestImageComparison.cs

The Limits of Automation

The Limits of Automated Testing

Published: May 19, 2014 by Brian Kitchener
Many people look to test automation as an alternative to manual testing, but struggle to understand how a machine can offer the same scope and coverage as a real person. The short answer is: it can't, which is why we advocate a hybrid model at ProtoTest. Both people and machines have their place in testing.
It’s easy for a person to see that a button is partially hidden or that text is the wrong size. We’ve come so far in technology that it doesn’t seem like a stretch to expect a machine to ‘see’ the same problems as a person. However, while it’s possible to build an automated script that checks the size and location of each button, frame, and text element, doing so is incredibly complicated, and often causes more problems than it fixes. The application’s source code literally specifies how it should function and how it should look; validating every aspect of an application would require roughly the same amount of code as the application itself.
Businesses want automation that provides both broad coverage and minimal maintenance. However, this is the crux of the issue: the more validation required, the more complicated the automation becomes, and the more maintenance it requires. After all, if automation truly validated every aspect of an application, it would fail after any change to the application, including new releases. The goal, then, is to create an automation suite that provides breadth of coverage without significant maintenance.


For this reason, most web automation suites focus on testing an application’s functionality and don’t bother validating how it looks, at least initially. There are a number of automation tools that support image comparisons, so it is certainly possible to perform an image-based test. However, a number of things can change from run to run, making such tests problematic. Eventually the tests break and end up failing repeatedly. So, how can we maintain a large suite of tests long term?
It’s easy to build a single test that runs today; it’s much harder to build a suite of tests that will run for months and years. Therefore, it’s much better to have a test that you can run consistently with very little maintenance than to have a large, bulky test that covers everything but constantly breaks. So instead of testing for everything, ProtoTest uses our proven prioritization approach, which focuses first on “Smoke Tests” and then on “Acceptance Tests”. We verify a user can log in, perform an action, and log out. Automation is supplemented by a manual testing effort. As we like to say at ProtoTest, "Automation makes humans more efficient, not less essential."


Testers are not robots designed to test every possible iteration through an application, just as a script is not able to sense meaning, infer context, or express empathy or frustration the way a person can. Rather than expecting testers to perform as robots, or scripts to perform as people, accept that testers and automation have different strengths to bring to your testing project.
Accept that automation has limits. Accept that it will not catch every random, odd defect, and will not notice whether a button is slightly off its intended position. Instead, imagine a world where, with the push of a button, you can validate every major user path through your application. When the major functionality is in working order, a very small amount of manual testing can validate the look of the application. The testing team can then focus on edge cases and on validating all the contextual information that a human brain is so good at processing.


Using this hybrid approach, we can reduce the testing window from days to hours. This allows businesses to release their product faster, becoming more agile. Accepting a minimum amount of risk allows for a cheaper, faster release cycle. Also, rather than living in fear of releasing a major defect, businesses can spend their time and money fixing minor problems and building new functionality.

Automated Analytics Testing


Published: November 25, 2013 by Brian Kitchener

Most modern web sites and mobile applications use an analytics service to keep track of a user’s actions. While there are a variety of different providers, Google Analytics and Omniture are probably the best known. Every time the user does something important (like logging in, or adding an item to their cart), an HTTP call is sent to a server with information about the user. Companies do this for a variety of reasons. The product owner may want to know how many people end up purchasing an item after browsing. The developers may want to figure out what percentage of logins are unsuccessful. There are many reasons that keeping track of each user is important. In fact, some web sites generate revenue from these calls. For example, every time an advertisement is displayed to a user, an analytics call is made to track it, and a client is billed based upon it. So validating that these calls are happening correctly is vitally important, but it presents a variety of testing difficulties.

Testing Analytics

Testing analytics manually is a straightforward, yet tedious, process. Since the client sends HTTP requests to a third-party server, nothing will appear on the web page, and nothing will show up in our application’s server logs. To validate that the traffic was sent, the tester must route their web browser through an HTTP proxy like Charles or Fiddler. These tools record and track all HTTP calls sent from the client. The tester then performs a specific action (like logging in) and verifies that the appropriate HTTP calls were sent and that they contained the correct information. Each call may have upwards of ten to twenty parameters that need to be validated, and a single scenario can send multiple Business Intelligence (BI) calls. Since there can be dozens, if not hundreds, of these scenarios to validate, testing them manually can take days or weeks.

Using Automation

Automating this scenario is challenging, but not nearly as tedious as validating it manually. Just like the manual approach, it involves an HTTP proxy, in this case BrowserMobProxy (http://bmp.lightbody.net/). BrowserMobProxy operates just like any other proxy, except that it has a REST API that can be queried to start and stop recording, and to fetch the list of HTTP calls. So testing these calls in an automated fashion is straightforward in any automation tool that supports REST. First, we install BrowserMobProxy and run it. Second, we proxy our mobile device or web browser through it, causing it to record all the traffic. Third, we execute our test, and once it is complete we make a GET request against BrowserMobProxy’s REST API to get the list of HTTP calls that it recorded. Validating that a specific request was sent becomes as easy as verifying a string is in the HTTP response body. Not every automated testing tool will work. We typically use a code-based tool like Selenium-WebDriver, or a GUI-based tool that supports both UI tests and HTTP requests, such as SOASTA’s TouchTest. Keep in mind that if you are using TouchTest, the mobile device, proxy, and SOASTA server must all be on an externally visible network.
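As a rough sketch of that workflow in C# (8080 is BrowserMobProxy's default REST port, 8081 stands in for whatever proxy instance you created, and the analytics URL fragment is only an example), a helper might start a HAR capture before the test and check the recorded traffic afterward:

using System.Net.Http;
using System.Threading.Tasks;

public static class AnalyticsTrafficCheck
{
    private static readonly HttpClient Client = new HttpClient();

    // REST API of the proxy server, plus the port of the proxy instance under test.
    private const string ProxyApi = "http://localhost:8080/proxy/8081";

    // Start recording traffic into a new HAR before the test begins.
    public static Task StartCaptureAsync() =>
        Client.PutAsync(ProxyApi + "/har", new StringContent(string.Empty));

    // After the test, fetch the recorded HAR and look for the expected call.
    public static async Task<bool> WasCallRecordedAsync(string expectedUrlFragment)
    {
        var har = await Client.GetStringAsync(ProxyApi + "/har");
        return har.Contains(expectedUrlFragment);
    }
}

A test would call StartCaptureAsync() before driving the UI, then assert that WasCallRecordedAsync("google-analytics.com") returns true once the scenario completes.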
As you can see, automated testing of analytics calls is fairly simple, once we know what the process is, and what tools to use. We kick off a test and once it is done, query our proxy to verify the correct requests were sent. And while it may take several days or even weeks to build the automated tests and set up the proxy, once everything is working it will drastically reduce the amount of manual work necessary to validate a release.