Benefits of Visual Testing in the Development Lifecycle
There’s no doubt that test automation is a staple of software development, and companies have increasingly adopted it; it has repeatedly proven to be a sound investment. With mature tools such as Selenium and Appium, and newer ones like Cypress and Playwright, teams have a wide variety of open-source options to choose from. But sooner or later, in many projects, there comes a point where assertions on DOM elements are not enough.
What do you do when you’re working on a project that handles 3D objects, and you want to make sure the application generates them the same way each time for a given set of input parameters? Or when, despite your best efforts to cover every scenario, the CI pipeline didn’t catch two elements suddenly overlapping each other? Even if you only want to make sure that an element did not shift by a single pixel, visual testing has you covered. Whether you are an engineer, a quality assurance analyst, or a tech enthusiast seeking to understand the importance of visual testing, this blog is for you. By the end of it, you will have a solid understanding of how visual testing can improve the quality of your software applications.
What is visual testing?
Visual testing is a way of making sure that the user interface did not change unintentionally: you take screenshots or snapshots of the application and compare them against a known-good baseline.
There are two steps needed to perform a successful screenshot comparison:
Step 1:
Take an original (baseline) screenshot, against which all subsequent iterations of the test execution will be compared. Ideally, this is done only once, or as rarely as possible. It is essential to verify at this stage that the application or functionality renders correctly, because this image becomes the source of truth. During regular test executions, this step is disabled. The screenshot has to be suitably named and stored at a location where the test runner has permission to read and write files.
Step 2:
Compare the current snapshot with the baseline taken in the first step. The comparison function should take the baseline screenshot’s location as a parameter.
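To make this concrete, here is a minimal sketch using Playwright, one of the frameworks mentioned earlier. Its built-in screenshot assertion handles both steps; the URL and snapshot name below are placeholders:

```ts
// visual.spec.ts -- a minimal sketch of a screenshot comparison test.
import { test, expect } from '@playwright/test';

test('dashboard looks the same as the baseline', async ({ page }) => {
  await page.goto('https://example.com/dashboard'); // placeholder URL

  // On the first run Playwright writes 'dashboard.png' as the baseline
  // (and fails the test until a baseline exists); on subsequent runs it
  // compares the current screenshot against that stored file.
  await expect(page).toHaveScreenshot('dashboard.png');
});
```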
These are the basics of visual testing. However, most testing frameworks let us customize these steps to fit our needs. We can add several more parameters, such as image cropping, masking, or adjusting the tolerance for how many pixels are allowed to differ before the test fails.
How can you tell what’s wrong when a test fails?
When a screenshot comparison test fails, it usually produces an output file. The two screenshots are compared pixel by pixel, and if there are any deviations between the two images, i.e., if corresponding pixels have different colors, the result is a visual diff file.
As an example, we would like to find the differences between these two images:
After running the check, we get this output diff image:
The red parts of the image are the differences, giving us a clear indication of where the problem might be. The parts that stay the same are usually faded out.
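Under the hood, many tools produce such a diff with a pixel-comparison library. Here is a rough sketch using the open-source pixelmatch and pngjs packages; the file names are illustrative, and the two images are assumed to have the same dimensions:

```ts
// diff.ts -- a rough sketch of producing a diff image with pixelmatch.
import * as fs from 'fs';
import { PNG } from 'pngjs';
import pixelmatch from 'pixelmatch';

const baseline = PNG.sync.read(fs.readFileSync('baseline.png')); // illustrative paths
const current = PNG.sync.read(fs.readFileSync('current.png'));
const { width, height } = baseline;
const diff = new PNG({ width, height });

// Compares the RGBA data pixel by pixel, paints mismatches into `diff`,
// and returns the number of differing pixels.
const mismatched = pixelmatch(
  baseline.data, current.data, diff.data, width, height,
  { threshold: 0.1 } // per-pixel color sensitivity, 0..1
);

fs.writeFileSync('diff.png', PNG.sync.write(diff));
console.log(`${mismatched} pixels differ`);
```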
What can you do if you have dynamic elements in your user interface?
More often than not, you will have to do some fine-tuning of the comparisons. There might be a carousel in your application that rotates articles or images, a counter that updates regularly, or randomly generated IDs displayed when generating a new model. In any case, depending on your needs and framework, you have to decide whether to crop the screenshots, mask the dynamic area, or increase the tolerance threshold. You can also take screenshots of specific elements only.
The easiest and least effective way of handling these issues is to compare only specific elements. If your framework allows it, you can take a screenshot of a single element, such as a canvas or an iframe. Be cautious: if you rely too much on this method for your visual testing, you might end up with more comparisons in your tests than you can feasibly maintain. Use this approach when you only have to compare a specific part of your application.
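In Playwright, for instance, the same screenshot assertion can be pointed at a single element; the URL and selector below are illustrative:

```ts
import { test, expect } from '@playwright/test';

test('only the 3D preview canvas is compared', async ({ page }) => {
  await page.goto('https://example.com/editor'); // illustrative URL

  // Screenshot one element instead of the whole page.
  await expect(page.locator('canvas#preview')).toHaveScreenshot('preview.png');
});
```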
Another way of handling these issues is image cropping. Before saving the original screenshot, create (or reuse) logic that crops the image based on the parameters you pass. Think of the crop area as a rectangle you draw on top of the image: you specify the X and Y coordinates of its top-left corner, along with the width and height of said rectangle.
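As a sketch, Playwright’s screenshot function accepts a clip rectangle that maps directly onto this description; the coordinates and names below are made up for illustration:

```ts
import { test, expect } from '@playwright/test';

test('compare only a cropped region', async ({ page }) => {
  await page.goto('https://example.com'); // illustrative URL

  // The clip rectangle: top-left corner at (0, 120), 800x400 pixels.
  const cropped = await page.screenshot({
    clip: { x: 0, y: 120, width: 800, height: 400 },
  });

  // Compare the cropped buffer against a stored baseline image.
  expect(cropped).toMatchSnapshot('header-region.png');
});
```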
Some frameworks allow you to mask the screenshots you take. You can think of it as reverse cropping: you draw a rectangle the same way as before, but instead of cropping the image, a mask is applied to it, covering that area with a solid color. This effectively ignores that part of the screenshot during comparisons.
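In Playwright, for example, masking is an option of the screenshot assertion; the selector below stands in for a hypothetical dynamic counter:

```ts
import { test, expect } from '@playwright/test';

test('dynamic counter is masked out', async ({ page }) => {
  await page.goto('https://example.com/stats'); // illustrative URL

  // Masked elements are painted over with a solid color before the
  // comparison, so changes inside them never fail the test.
  await expect(page).toHaveScreenshot('stats.png', {
    mask: [page.locator('.live-counter')], // hypothetical selector
  });
});
```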
The last way of keeping false positives to a minimum is increasing or decreasing the tolerance of the comparisons. There might be minor pixel differences in the user interface between test runs that you, the customer, or the user would find irrelevant. Or, when dealing with objects for 3D printing, sometimes the supports are generated a minuscule amount differently each time, which is expected behavior. In such cases you can simply adjust this parameter, but make sure you don’t increase it too much, or you might start dealing with false negatives.
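Frameworks expose this tolerance in different ways. In Playwright, for instance, you can cap either the absolute number of differing pixels or their ratio; the values below are illustrative starting points, not recommendations:

```ts
import { test, expect } from '@playwright/test';

test('3D model render is within tolerance', async ({ page }) => {
  await page.goto('https://example.com/model/42'); // illustrative URL

  await expect(page).toHaveScreenshot('model.png', {
    maxDiffPixels: 150, // absolute number of pixels allowed to differ
    // maxDiffPixelRatio: 0.01, // alternatively: a fraction of all pixels
  });
});
```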
Most of the time, you will do a combination of the above.
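For example, in Playwright these options compose in a single assertion; every name and value below is illustrative:

```ts
import { test, expect } from '@playwright/test';

test('element screenshot with masking and tolerance combined', async ({ page }) => {
  await page.goto('https://example.com/editor'); // illustrative URL

  await expect(page.locator('#workspace')).toHaveScreenshot('workspace.png', {
    mask: [page.locator('.session-id')], // hide a randomly generated ID
    maxDiffPixelRatio: 0.005,            // tolerate minor rendering noise
  });
});
```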
How can you include diff images in your reports?
Some reporting tools enable you to include links to screenshots, while others allow you to edit the report’s HTML template directly and embed them. It is worth checking in advance how easily your reports can include custom screenshots. Either way, you will have to provide the path to where the images are stored.
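As one example, Playwright lets a test attach arbitrary files to its report, and most of its reporters render image attachments inline; the path below is an assumption about where your diff image ends up:

```ts
import { test } from '@playwright/test';

test('visual check with diff attached to the report', async ({ page }, testInfo) => {
  // ... run the visual comparison here ...

  // Attach the diff image so the reporter can embed or link to it.
  await testInfo.attach('visual-diff', {
    path: 'test-results/diff.png', // assumed output location
    contentType: 'image/png',
  });
});
```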
Where should you save screenshots and/or diff images?
As with all of the specifics, it depends on the needs of your project. In a simple application where only manual test runs are done, saving the screenshots in a local folder inside the project itself should be enough. But for any larger project, where for example the tests are run in a CI pipeline, you have several possibilities.
If your server is configured to purge all test data after each run and, for whatever reason, you can’t exclude the screenshot output folder, the screenshots need to be hosted externally. This may be a different place on the same server, a different server, or even an S3 bucket.
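As a rough sketch, uploading diff images to an S3 bucket from CI could look like the following, using the official AWS SDK v3 for JavaScript; the bucket name, region, and key layout are assumptions, not part of any particular setup:

```ts
// upload-diff.ts -- a hedged sketch of pushing diff images to S3 from CI.
import * as fs from 'fs';
import { S3Client, PutObjectCommand } from '@aws-sdk/client-s3';

const s3 = new S3Client({ region: 'eu-central-1' }); // illustrative region

async function uploadDiff(localPath: string, runId: string): Promise<void> {
  await s3.send(new PutObjectCommand({
    Bucket: 'visual-test-artifacts',    // illustrative bucket name
    Key: `diffs/${runId}/${localPath}`, // keep each CI run separated
    Body: fs.readFileSync(localPath),
    ContentType: 'image/png',
  }));
}
```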
Another possibility is to configure a job that sends out an email containing an archive, or a link to one, that includes the report and the images.
Caveats
There are some things you have to be aware of that might require extra effort on your part. First, the user interface might differ slightly from browser to browser, meaning that in most cases you will have to maintain browser-specific screenshots. Additionally, for the best possible results, it is recommended to run the tests in headless mode with a predetermined resolution. If multiple people collaborate on the project, non-headless runs can differ depending on each user’s display settings, such as resolution and display scaling. Because of this, debugging problems locally on your machine might prove more difficult when you choose non-headless mode.
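A sketch of pinning these variables down in a Playwright configuration; the browsers and viewport are illustrative choices. Playwright also stores baselines per project and platform, so each browser gets its own set of screenshots:

```ts
// playwright.config.ts -- fixing headless mode, viewport, and browsers
// so screenshots are reproducible across machines and CI.
import { defineConfig, devices } from '@playwright/test';

export default defineConfig({
  use: {
    headless: true,                         // avoid host display settings
    viewport: { width: 1280, height: 720 }, // fixed, predictable resolution
  },
  projects: [
    { name: 'chromium', use: { ...devices['Desktop Chrome'] } },
    { name: 'firefox', use: { ...devices['Desktop Firefox'] } },
  ],
});
```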
Visual testing is a valuable addition to the arsenal of software testing techniques. By capturing screenshots or snapshots of the application and comparing them, it enables you to identify deviations in the user interface, and the visual diff files generated from these comparisons pinpoint the exact areas that require attention, making issues easier to diagnose and resolve. Dynamic elements pose a challenge, but as we have seen, there are techniques to handle them effectively. In summary, visual testing provides a powerful approach to improving the quality of your software applications; by incorporating it into your testing strategy, you can enhance the reliability and user experience of your products.