Thursday, December 11, 2014

How Do You Automate in an Agile World?

Published: October 08, 2014 by Brian Kitchener
We get it, test automation is hard. It's hard to build a suite of automation that works consistently and reliably. It's even harder to keep that suite working for months or years at a time. And it's hardest when you're trying to do so at the rapid pace of an agile software development world. After all, the agile mantra says that we design, build, and test code in the same sprint. And most of the time the code is barely functional -- it's only in the last several days that everything comes together to the point that the GUI is usable. How, then, can we successfully build test automation in such a short window?
It's actually not that difficult, assuming you have a properly designed suite of automation. For tests to be easy to maintain, they must be built in a modular, scalable, reusable way.
However, if your automated tests look like hundreds of click commands, it becomes impossible to update them quickly when the application changes. Tests like these are typically the result of a record-and-playback strategy. The result often involves throwing away lots of test code whenever the application is changed.
Contrast this with the ProtoTest Golem framework, which offers a simple, yet powerful, method for modelling and abstracting applications: page objects. The page object model lets us represent the application in a layer of code separate from our tests. Any changes to the application necessitate only a single set of modifications to the page object, rather than re-writing lots of tests. It’s not that difficult to perform most of the updates even before the UI is fully revised.
Once the new app code is deployed, the test engineer only needs to input valid locators for each changed element, and is able to immediately run the test suite. This quick turnaround in response to changes is the benefit of our approach, allowing testers to keep pace with their agile teammates.
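To make the pattern concrete, here is a minimal sketch of a page object. The page, URL, and locators are all hypothetical, and plain Selenium-WebDriver calls are shown for clarity; Golem page objects follow the same shape.

using OpenQA.Selenium;

// Hypothetical page object for a login page. Tests call LoginAs();
// only this class knows the locators, so when the UI changes the
// locators are updated in one place instead of in every test.
public class LoginPage
{
    private readonly IWebDriver driver;

    public LoginPage(IWebDriver driver)
    {
        this.driver = driver;
    }

    public void LoginAs(string username, string password)
    {
        // Locators appear only here. If a developer renames a field,
        // this is the only place that needs a new locator.
        driver.FindElement(By.Id("username")).SendKeys(username);
        driver.FindElement(By.Id("password")).SendKeys(password);
        driver.FindElement(By.Id("loginButton")).Click();
    }
}

A test then reads as a sequence of business actions, such as new LoginPage(driver).LoginAs("testUser", "password"), rather than a long list of raw clicks.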
Now you see why it is so important to choose a tool (or framework) that enables test code reuse and minimizes the need to revise tests themselves. The more maintainable your test scripts, the more likely you will be able to move at the speed of Agile.
This is why so many of our clients choose the Golem framework. It provides a simple, clean, and repeatable process that allows your organization to implement large-scale automation successfully over the long-run.

The Value of API Testing

Published: September 25, 2014 by Brian Kitchener
At ProtoTest, we often recommend that our clients implement a suite of automated functional API tests. And yet, many people struggle to understand how truly valuable API tests are.
Since they don’t test something directly user-facing, clients don’t want to prioritize them over creating UI tests. And because the UI tests are never finished, the API tests are never even started.
Yet if you analyze the return on investment, API automation can save the organization a significant amount of money, because the API is the backbone of the entire application stack. It is the interface that all the pieces use to communicate with each other. It’s a contract, stating exactly how each piece of the application has to work. As such, it’s the glue that holds everything together.
What tends to happen is that there are multiple client applications, such as a mobile app and a website, that both use the same services layer. In order to support a new feature on one client, the services are modified in a way that the other client doesn’t expect. This causes the other client to break, even though the release had nothing to do with it. The only way to catch this type of error is to test both clients, which costs a lot of time and money.
The API layer usually sits on top of the back end of the application stack. In a web site, it would be a series of REST services that can read and write to the database, and execute code on the server. These services are usually shared across multiple front-end applications. So once the API is functionally tested, we can be confident that the back-end system is fully operational. This allows us to test and release the back end completely separately from the UI, which dramatically and immediately reduces the cost of making changes to back-end code.
API tests allow you to test components in isolation, dramatically reducing the number of test cases needed for each release. And since API tests run almost instantaneously, and without any human interaction, even if a defect does make it into production, it's extremely cheap and easy to test and release a fix.
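To show how lightweight such a test can be, here is a minimal sketch; the URL and expected payload are hypothetical, and the assertion style follows MbUnit, the tooling we use elsewhere.

using System.Net;
using MbUnit.Framework;

[TestFixture]
public class ItemServiceTests
{
    [Test]
    public void GetItems_ReturnsItemList()
    {
        using (var client = new WebClient())
        {
            // One HTTP call and one assertion: the whole test runs in
            // milliseconds and never touches the UI.
            string body = client.DownloadString("http://test.example.com/api/items");
            Assert.IsTrue(body.Contains("\"items\""), "Expected an items payload");
        }
    }
}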
A suite of functional API tests puts the organization in a position where you can release quickly, cheaply, and independently. These are the true goals of any effective testing organization, and this is why they are one of our first recommendations.
Interested in learning how to implement a suite of automated API tests? Contact me to learn more.

Dealing with Data in Test Automation

Published: June 17, 2014 by Brian Kitchener
Figuring out how to manage test data can certainly be challenging. In order for an automated test to run, it needs to have some sort of data. What kind of data it needs will largely depend on the type of application being tested.
For an ecommerce site, an automated test might need a username and password, an item name to add to the cart, and a credit card number to use during checkout. However, these data items can change over time. Perhaps the database is refreshed, wiping out the user's account. Perhaps the item being purchased is no longer available. Perhaps the credit card has expired.
Whatever the reason, the data used during a test can and will change over time, and we need some way of handling it. There are four primary approaches to fluctuating test data:

1 - Past State

Since tests need to use a specific data set, the first option is to simply set the application to a previous state instead of having the tests worry about data expiring. We can do this by saving the production database at a specific point in time and then refreshing our testing environment's database back to this image before each test run. This ensures the data the test expects is always the same. The refresh can be performed by the IT department, developers, or QA engineers, or scheduled automatically through a CI server like Jenkins.

2 - Get State

The second option is to fetch the application's current state and pass that data into our test automation. We can do this by reading from the database, scraping information off the GUI, or calling a service. For example, before attempting to add an item to the cart, the automated test makes an HTTP request to a REST service to get the list of current, active items. We now have a set of items we know should be valid. Fetching test data can be done before each test, if the automation tool supports it. Alternatively, a daily job can be scheduled to store the current data to a file that the automated suite then parses.
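As a sketch of that fetch step, assuming a hypothetical REST endpoint that returns a JSON array of active items (parsed here with Json.NET):

using System.Net;
using Newtonsoft.Json.Linq;

public static class TestData
{
    // Ask the service which items are active right now, so the cart test
    // uses a known-valid item instead of a stale, hard-coded one.
    public static string GetFirstActiveItemName()
    {
        using (var client = new WebClient())
        {
            string json = client.DownloadString("http://test.example.com/api/items/active");
            JArray items = JArray.Parse(json);
            return (string)items[0]["name"];
        }
    }
}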

3 - Set State

The third option is to maintain a separate set of automated scripts that exist simply to create the needed data through the application's GUI. These may run against a back-end application different from the end-user application. For example, many web applications have an administrative GUI that can be used to create and delete items in the system. We can automate this application to create the users and items needed for the automated tests. As it is a separate set of scripts, these only need to be run periodically - after a database refresh, when a new environment is spun up, etc.
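Here is a sketch of such a script, driving a hypothetical admin GUI with Selenium-WebDriver; every URL and locator is invented for illustration.

using OpenQA.Selenium;
using OpenQA.Selenium.Firefox;

public static class DataSetup
{
    // Run after a database refresh, or when a new environment is spun up,
    // to recreate the accounts the functional tests expect to exist.
    public static void CreateTestUser(string username, string password)
    {
        IWebDriver driver = new FirefoxDriver();
        try
        {
            driver.Navigate().GoToUrl("http://admin.example.com/users/new");
            driver.FindElement(By.Id("username")).SendKeys(username);
            driver.FindElement(By.Id("password")).SendKeys(password);
            driver.FindElement(By.Id("save")).Click();
        }
        finally
        {
            driver.Quit();
        }
    }
}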

4 - Zero Sum

In this approach we create and delete our data as part of the test. If the scripts require a username and password, instead of assuming that the user is already created, they create the user as the first step and delete the user as the last step. This, of course, assumes that full CRUD (Create, Read, Update, Delete) functionality is available in the application. This may seem like additional work, but this functionality needs to be tested anyway. For example, we create four tests, each run sequentially: the first creates a new item in the system; the second test reads that item and verifies the data is correct; the third updates the item and verifies the update was successful; and the fourth deletes the item and confirms the item is gone.
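A sketch of that four-test sequence is below; ItemApi is a hypothetical helper wrapping the application's CRUD calls, and we assume the test runner executes the tests in the order shown.

using MbUnit.Framework;

[TestFixture]
public class ItemCrudTests
{
    private static string itemId;

    [Test]
    public void Test1_CreateItem()
    {
        itemId = ItemApi.Create("widget");   // create the data we need
        Assert.IsNotNull(itemId);
    }

    [Test]
    public void Test2_ReadItem()
    {
        Assert.AreEqual("widget", ItemApi.GetName(itemId));
    }

    [Test]
    public void Test3_UpdateItem()
    {
        ItemApi.Rename(itemId, "gadget");
        Assert.AreEqual("gadget", ItemApi.GetName(itemId));
    }

    [Test]
    public void Test4_DeleteItem()
    {
        ItemApi.Delete(itemId);              // leave the system as we found it
        Assert.IsFalse(ItemApi.Exists(itemId));
    }
}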
When at all possible, a Zero Sum approach is best: it is self-contained, very efficient, and almost forces us to have good test coverage. However, most of the time a hybrid approach is needed, using more than one strategy in combination. For example, we might have a ‘Set State’ style script that creates the cross-test data, such as users. Then each test would try to be ‘Zero Sum’ with respect to its own data. Using the right combination of approaches can drastically reduce test execution time while also increasing coverage and efficiency.

Image Validations Using Golem

Published: May 27, 2014 by Brian Kitchener
WebDriver is an amazing tool, and it has the broadest support of any open source automation tool on the market. However, many commercial tools offer something that WebDriver does not: the ability to perform a visual image comparison.
Image comparisons are useful for identifying visual defects such as CSS problems, rendering issues, and layout defects. Historically, however, they haven't worked well: the slightest change in the UI would cause the validation to break. In addition, full-page screenshots were typically the only images supported, so even a slight change in layout (such as running the test on a different computer) would make the validation fail.
To this end, the Golem framework (https://github.com/ProtoTest/ProtoTest.Golem) includes code to easily perform image-based validations on individual UI elements. This allows an individual button or panel to have its UI validated. In addition, it uses a configurable “fuzzy” image comparison algorithm that will adjust for different sized images, or slight rendering differences between runs.
So, an image that is recorded at one resolution in Firefox can be compared against an image that was recorded at a different resolution in Chrome. The comparison can be adjusted with two separate parameters to allow fine-tuning for each element.
In the following example, the compared source images must be built and stored by Golem. However, the underlying methods could be used with any two images.
Image validations are currently supported only for elements. The framework uses the name of the element, passed in during instantiation, to name the file it saves to the hard drive. During test execution, if no image is found on disk, the current image is saved. This means we must first execute our test against a known-good site so it can capture and store the baseline images. Every subsequent run uses those images as the comparison. If a baseline image needs to be updated, simply delete it from the hard drive and a new image will be saved the next time the test runs.
To perform an image comparison, we simply define our element and verify the image:
[Test]
public void TestImage()
{
    // Navigate to the page under test.
    driver.Navigate().GoToUrl("http://www.google.com");

    // The element's name ("SearchButton") doubles as the stored image's file name.
    Element searchButton = new Element("SearchButton", By.Name("btnK"));

    // Compares against the stored baseline, or saves one on the first run.
    searchButton.Verify().Image();
}
After we execute the test the first time, the image is stored to the hard drive in: ProjectDirectory\ElementImages\SearchButton.bmp
[Image ImageComp1: the stored baseline image of the search button]
Subsequent runs will compare against this stored image. If we switch our browser to Chrome, the validation still passes, even though the size and font differ slightly between browsers. Now in the report we see:
[Image ImageComp2: the passing comparison shown in the report]
To see what a test failure looks like, we will first edit our image and mark it:
[Image ImageComp3: the baseline image, edited and marked to force a mismatch]
Now when we execute the test, the report will show an error, and the difference between the images:
[Image ImageComp4: the report showing the error and the difference between the images]
There you have it: a simple, clean way to compare the visual look of UI elements. The Golem framework contains an example test using image comparisons here: https://github.com/ProtoTest/ProtoTest.Golem/blob/master/ProtoTest.Golem/Tests/TestImageComparison.cs

The Limits of Automated Testing

Published: May 19, 2014 by Brian Kitchener
Many people look to test automation as an alternative to manual testing, but struggle to understand how a machine can offer the same scope and coverage as a real person. The short answer is: it can't, which is why we advocate a hybrid model at ProtoTest. Both people and machines have their place in testing.
It's easy for a person to see that a button is partially hidden or that text is the wrong size. We've come so far in technology that it doesn't seem like a stretch to expect a machine to 'see' the same problems as a person. However, while it's possible to build an automated script that checks the size and location of every button, frame, and text element, doing so is incredibly complicated and often causes more problems than it fixes. The application's source code already specifies how it should function and how it should look; validating every aspect of an application would require roughly as much code as the application itself.
Businesses want automation that provides both broad coverage and minimal maintenance. This is the crux of the issue: the more validation required, the more complicated the automation becomes, and the more maintenance it requires. After all, if automation truly validated every aspect of an application, it would fail after any change to the application, including every new release. The goal, then, is to create an automation suite that provides breadth of coverage without significant maintenance.


For this reason, most web automation suites focus on testing an application's functionality and don't bother validating how it looks, at least initially. A number of automation tools support image comparisons, so it is certainly possible to perform an image-based test. However, many things can change from run to run, making such tests brittle; eventually they break and fail repeatedly. So how can we maintain a large suite of tests long term?
It's easy to build a single test that runs today; it's much harder to build a suite of tests that will run for months and years. It's therefore much better to have a test you can run consistently with very little maintenance than a large, bulky test that covers everything but constantly breaks. So instead of testing for everything, ProtoTest uses a proven prioritization approach that focuses first on “smoke tests,” then on “acceptance tests.” We verify a user can log in, perform an action, and log out. Automation is supplemented by a manual testing effort. As we like to say at ProtoTest, "Automation makes humans more efficient, not less essential."


Testers are not robots designed to exercise every possible path through an application, just as a script cannot sense meaning, infer context, or express empathy or frustration the way a person can. Rather than expecting testers to perform as robots, or scripts to perform as people, accept that testers and automation bring different strengths to your testing project.
Accept that automation has limits. It will not catch every random, odd defect, and it will not notice that a button is slightly off its intended position. Instead, imagine a world where, with the push of a button, you can validate every major user path through your application. When the major functionality is in working order, a very small amount of manual testing can validate the look of the application. The testing team can then focus on edge cases, validating all the contextual information that a human brain handles so well.


Using this hybrid approach, we can reduce the testing window from days to hours. This allows businesses to release their product faster, becoming more agile. Accepting a small amount of risk allows for a cheaper, faster release cycle. And rather than living in fear of releasing a major defect, businesses can spend their time and money fixing minor problems and building new functionality.

Automated Analytics Testing

Published: November 25, 2013 by Brian Kitchener

Most modern web sites and mobile applications use an analytics service to keep track of a user's actions. While there are a variety of providers, Google Analytics and Omniture are probably the best known. Every time the user does something important (like logging in, or adding an item to their cart), an HTTP call is sent to a server with information about the user. Sites do this for a variety of reasons: the product owner may want to know how many people end up purchasing an item after browsing, and the developers may want to figure out what percentage of logins are unsuccessful. Some web sites even generate revenue on these calls. For example, every time an advertisement is displayed to a user, an analytics call is made to track it, and a client is billed based upon it. So validating that these calls happen correctly is vitally important, yet it presents a variety of testing difficulties.

Testing Analytics

Testing analytics manually is a straightforward, yet tedious, process. Since the client sends HTTP requests to a third-party server, nothing will appear on the web page, and nothing will show up in our application's server logs. To validate the traffic was sent, the tester must route their web browser through an HTTP proxy like Charles or Fiddler. These tools record and track all HTTP calls sent from the client. The tester then performs a specific action (like logging in) and verifies that the appropriate HTTP calls were sent and that they contained the correct information. Each call may have upwards of ten to twenty parameters that need to be validated, and a single scenario can send multiple Business Intelligence (BI) calls. Since there can be dozens, if not hundreds, of these scenarios to validate, testing manually can take days or weeks.

Using Automation

Automating this scenario is challenging, but not nearly as tedious as validating it manually. Just like the manual steps, this will involve an HTTP Proxy called BrowserMobProxy (http://bmp.lightbody.net/). BrowserMobProxy operates just like any other proxy, except that it has a REST API that can be queried to start and stop recording, and to fetch the list of HTTP calls. So testing these calls in an automated fashion is straightforward in any automation tool that supports REST. First we install BrowserMobProxy and run it. Second, we proxy our mobile device or web browser through the proxy, causing it to record all the traffic. Third, we execute our test, and once it is complete we make a GET request against BrowserMobProxy’s REST API to get the list of HTTP calls that it recorded. Validating that a specific request was sent becomes as easy as verifying a string is in the HTTP response body. Not every automated testing tool will work. We typically use a code-based tool like Selenium-WebDriver, or a GUI-based tool that supports both UI tests and HTTP requests such as SOASTA’s TouchTest. Keep in mind that if you are using TouchTest the mobile device, proxy, and SOASTA server all must be on an externally-visible network.
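As a sketch of that final verification step in C#: the /proxy/{port}/har endpoint follows BrowserMobProxy's documented REST API, but the ports and the analytics URL being checked here are assumptions for illustration.

using System.Net;
using MbUnit.Framework;

[TestFixture]
public class AnalyticsTests
{
    [Test]
    public void Login_SendsAnalyticsCall()
    {
        // ... UI steps performing the login go here, with the browser or
        // device proxied through BrowserMobProxy (assumed on port 8081) ...

        using (var client = new WebClient())
        {
            // Fetch everything the proxy recorded as a HAR (JSON) document.
            string har = client.DownloadString("http://localhost:8080/proxy/8081/har");

            // Validating the analytics call reduces to a string check.
            Assert.IsTrue(har.Contains("google-analytics.com"),
                "Expected analytics request was not recorded");
        }
    }
}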
As you can see, automated testing of analytics calls is fairly simple, once we know what the process is, and what tools to use. We kick off a test and once it is done, query our proxy to verify the correct requests were sent. And while it may take several days or even weeks to build the automated tests and set up the proxy, once everything is working it will drastically reduce the amount of manual work necessary to validate a release.

The Automation Pyramid

Published: October 10, 2013 by Brian Kitchener

The concept of the ‘automation pyramid’ comes up in automation discussions all over the world. The idea is fairly simple: there are multiple layers of automation, each focusing on a different area of the application and offering a different degree of coverage. The base of the pyramid is the unit tests, which execute directly against the code. Next are the API tests, which execute against a service layer. Finally, at the top of the pyramid sit the UI tests, which validate the application as a whole. Each of these layers offers a different level of coverage. Unit tests are fast and can test many different permutations, but don't test any integrations between components. API tests can validate that our units integrate, but can't test an end-to-end user scenario. UI tests validate multiple components in a single test, but are typically very fragile and take a long time to run. So each layer plays a pivotal role, and each needs to be exercised thoroughly.



The Base of the Pyramid

At the base of the pyramid are the unit tests. As our foundation, they have the broadest coverage and should test as many permutations as possible. Typically they test a specific “unit” of code: they instantiate a class, call some functions, and verify the functions return the correct results. Unit tests are built in the same language as the code and are usually stored and executed with it. Because they execute a piece of code directly, it's extremely easy to test many different permutations. For example, if I have a function that does some simple arithmetic, I could test it 100 different ways in a couple of seconds; verifying the same logic through the UI could take hours because of all the navigation and setup required. Our unit tests should therefore provide extremely broad coverage, testing as many permutations as possible and validating important business logic at the component level. However, remember that they are unit tests: they will not test how components work together, only that each individually works as expected.
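As a sketch of how cheap permutations are at this level, here is a data-driven unit test using MbUnit's [Row] attribute (Calculator is a hypothetical class under test); each row is one permutation, and dozens of them run in milliseconds.

using MbUnit.Framework;

[TestFixture]
public class CalculatorTests
{
    // Each [Row] supplies one set of inputs and the expected result.
    [Test]
    [Row(1, 2, 3)]
    [Row(0, 0, 0)]
    [Row(-5, 5, 0)]
    [Row(1000000, 2000000, 3000000)]
    public void Add_ReturnsExpectedSum(int a, int b, int expected)
    {
        Assert.AreEqual(expected, Calculator.Add(a, b));
    }
}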

The Middle of the Pyramid

The next level of our pyramid is made up of API/service layer tests. These fit somewhere between UI and unit tests in complexity and scope. This layer is where we first start to test how our components integrate, and how they handle real data. The advantage of API testing is that a lot of logic can be validated without being dependent upon the UI. For example, suppose we are building a new web site. Even before the UI is finished, the service layer and authentication mechanisms could be complete, so I can build a set of API tests that verify the login service works as expected. And while these tests aren't quite as fast as unit tests, you can expect most APIs to return a result in less than a second. That means testing for a variety of conditions, such as invalid credentials, timeouts, or special characters, is very easy and very fast. This frees up our UI tests to focus on the UI and end-user functionality instead of trying to validate every piece of business logic. Without API tests, we couldn't even start testing until the UI was finished.
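For example, a login-service test might look like the sketch below; the URL and JSON payload are hypothetical, and variations for locked accounts, timeouts, or special characters are a copy-paste away.

using System.Net;
using System.Net.Http;
using System.Text;
using MbUnit.Framework;

[TestFixture]
public class LoginServiceTests
{
    [Test]
    public void Login_WithInvalidCredentials_Returns401()
    {
        using (var client = new HttpClient())
        {
            var body = new StringContent("{\"user\":\"nobody\",\"pass\":\"wrong\"}",
                Encoding.UTF8, "application/json");

            // Blocking on .Result is fine in a test; the service should
            // answer in well under a second.
            var response = client.PostAsync("http://test.example.com/api/login", body).Result;

            Assert.AreEqual(HttpStatusCode.Unauthorized, response.StatusCode);
        }
    }
}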

The Top of the Pyramid

The highest level, and therefore the smallest layer, is comprised of the UI tests. It is at this level that we actually launch our application and perform an end-to-end test through it, finally verifying that all the individual pieces work together as a cohesive whole. But since much of the business logic has been covered in the lower levels, these tests can focus on validating that the UI looks and behaves appropriately. It's important to remember that UI tests are extremely slow compared with our unit and API level tests: a UI test usually takes a minute or so, while an API test takes several seconds and a unit test less than a second. But a single UI test will validate hundreds of components, so its coverage is much deeper than the other layers. However, the UI is constantly changing as new functionality is added, which means the UI tests are inherently less stable and will require more maintenance than the other layers. This is why we try to keep this layer as small as possible.
[Image: the “failed” pyramid, balanced on its end]
Breaking the application into these layers lets us test it systematically and logically, component by component. Without a strong foundation of unit and API tests, we are forced to automate every possible scenario in the UI layer. And while it's still possible to automate only UI tests, the resulting automation suite ends up expensive, slow, and fairly fragile; while it's possible to balance a pyramid on its end, it certainly won't be stable. This is why many companies fail in their efforts to implement effective automation strategies. If we take a systematic approach, and all the levels are automated to an appropriate degree, we can build and maintain a cohesive suite of automated tests with much less effort.

Tuesday, April 8, 2014

Introducing Golem, an Object Oriented C# Framework

Hello everyone, I am pleased to announce the release of Golem, an open source, object-oriented C# framework, now available on GitHub. Golem was an internal tool that ProtoTest has successfully used on a number of our clients' projects, and we like it so much we want to share it with everyone. It supports a number of automation tools like Selenium-WebDriver, Appium, and Microsoft's UIAutomation, and can even test REST services and validate HTTP traffic. Tests are written in Visual Studio, using MbUnit, and are executed using Gallio. It's an all-in-one automation tool for anyone working in a .NET environment. Golem makes building clean, robust, reusable, and scalable automation easy. It's available in NuGet now!

Golem has a number of advantages:
  1. Simple, object-oriented API
  2. Advanced features (data-driven testing, parallel test execution)
  3. Robust reporting and logging
  4. Multiple automation tools supported
  5. Fully configurable through an App.Config or through code
You can find our official announcement below (copied from ProtoTest's blog):


Prepare yourselves; Golem is coming. A creature from myth and legend returns, reborn for the new age. In ancient folklore, a Golem is a mindless automaton, an unstoppable force, yet it obeys those bold enough to command it. And therein lies the danger, for it will perform any command given to it, faithfully and without rest. For it has no mind of its own, it requires an intelligent being to control it. But beware, the ancient proverb ‘He who rides a tiger is afraid to dismount’ has never been truer: holding power is both addicting and hard to relinquish. Once the power of Golem is yours to command, you cannot go back.
 

Introducing Golem

At ProtoTest, we are passionate about the value of test automation. In fact, our motto is: “Automation makes humans more efficient, not less essential.” We think it is a great way to supplement any manual testing effort. Yet we see a number of people (and companies) struggling to implement test automation in a meaningful way.
There is a qualitative leap between recording and playing back a test and building an enterprise-scale automation suite maintained by dozens of people. This is where a test automation framework steps in. It helps to simplify the process of building, maintaining, and executing a large set of automated tests.
Most automation frameworks have three goals: to simplify the process of building tests, to help diagnose why tests fail, and to allow us to share and reuse code. And while there are a number of automation frameworks on the market, we could not find one that provided the level of simplicity, reusability, and elegance we wanted.
So, we decided to make our own. For the past several years, we have been building and tweaking our own framework in C#. We built the framework around MbUnit and Gallio because they have advanced features for UI-based automation. We included support for a number of tools like Selenium-WebDriver, Microsoft's UIAutomation, and Appium.
Golem supports most commonly tested enterprise platforms: web browsers, mobile applications, Windows applications, HTTP traffic, and REST services. Tests are written using the industry-standard ‘page object’ design pattern, and the test report includes as much diagnostic information as we could gather.
In addition, we added several advanced features like data driven testing and parallel test execution. Then we made all of it easily configurable. And now, we want to share it with the world. We are officially announcing the release of the Golem open source project, developed by ProtoTest.
The user group: https://groups.google.com/forum/#!forum/prototest-golem
The source code is available on GitHub: https://github.com/ProtoTest/ProtoTest.Golem
The package is available from NuGet: https://www.nuget.org/packages/ProtoTest.Golem/
The documentation is available here: https://github.com/ProtoTest/ProtoTest.Golem/wiki

Tuesday, April 1, 2014

3 Cool things you can do with JavaScript Injection

Learning how to interact with or modify an application directly is one of the more advanced skills for a tester. But once we have a hook into an application, we can do a variety of things: access internal variables, call methods, manipulate the application's state, or even modify the code ourselves. For some types of applications this is extremely hard; for others it's relatively easy. One of the easiest types of applications to manipulate directly is a web page, because most of the code is stored locally on the client, none of it is obfuscated, and all of it is modifiable. This means there are a variety of fun things we can do through any browser with a console, like Firefox or Chrome.

You can get to the console in Firefox or Chrome by right-clicking, selecting Inspect Element, and clicking on the Console tab when the new panel opens. Any commands we enter into the prompt will be fired against the web page. Just remember that if a new page is loaded, or if the browser refreshes, anything you do is lost. Alternatively, many automated testing tools provide a way to execute code against the page, which provides an easy mechanism to manipulate the application automatically.

1) Modify or execute the code

One of the most useful things to do when testing a web site is to access its internal variables and methods. This allows us to modify the web page even without a UI. For instance, suppose that after 60 minutes of inactivity the user is supposed to get a prompt asking them to stay logged in. We certainly don't want to let our computer idle for an hour every time we test this. Instead, we can make the page time out after 1 minute by changing the timeout value from 60 minutes to 1. You can typically ask a developer what the variable is called, or use the console to find it.
So let's assume a developer told us the variable is named timeoutMin. Modifying it is easy. From the console, enter: window.timeoutMin = 1; Now the web page will use the new value instead of the old one.

2) Hiding / Showing Elements

Occasionally when working with a web site that is under development, something won't display correctly. For example, an extra panel appears, covering the web page and preventing you from doing any work. Or perhaps the login panel doesn't appear, blocking any testing that requires a login until the issue is fixed.
Hiding or showing elements is easy, provided they have an id, class, or name. Using Chrome, you can right-click on the element and select Inspect Element. If the element's HTML contains an id attribute, you can use it to manipulate the object. If it doesn't have an id, you can even add one, or try getElementsByClassName, getElementsByName, or getElementsByTagName.
The command to hide an element: document.getElementById("idOfElement").style.visibility = 'hidden';
The command to show an element: document.getElementById("idOfElement").style.visibility = 'visible';

3) Adding / Removing Page Events

A web page works by registering functions that run when certain events occur. There are a variety of event types fired whenever the user clicks, types, or moves the mouse. For example, a button can have a function registered to its click event, called “onclick”; when the user clicks the button, the function runs. If we want, we can add, delete, or replace these events with our own. Let's look at three examples:
1) To illustrate how to replace an event we will try to disable all click events on the page.  To achieve this we replace the document’s onclick function with one that does nothing. 
document.onclick = function() { return false; };
Most click actions on the page are now disabled.
2) If we don’t want to disable them all let’s add an additional event without removing the old one.  We do this by adding a new listener.  We can add an event to either the entire page, or to a specific element.  For example, let’s suppose I wanted to highlight the element that my mouse is over. I’m going to add two listeners, one to highlight an element under my mouse, and one to un-highlight when the mouse leaves. 
document.addEventListener('mouseover', function(e) { e = e || window.event; e.target.style.border = '3px solid red'; }, false);
document.addEventListener('mouseout', function(e) { e = e || window.event; e.target.style.border = ''; }, false);
3) Lastly, let's suppose I want to show an alert message when I click the login button. This will “pause” the web page and allow me to inspect traffic, HTML, etc.
document.getElementById("idOfElement").addEventListener('click', function(e) { e = e || window.event; alert("Element was clicked"); }, false);

As you can see, there are a variety of reasons why we might need to modify a web page. It's not the sort of thing you'll need every day, but it's a great extra tool in any SQE's tool belt.