Testing’s Evolution That Did More Harm Than Good

By Dan Laun, General Manager, Testing at Perforce.

Despite advances in software testing over the past couple of decades — automation, continuous testing, low-code/no-code platforms, shift-left and more — one thing has never changed: testing is still inefficient and time-consuming, and it slows down software development projects. Those advances have helped to some extent, but critical aspects such as test validation are still cumbersome, high-maintenance and error-prone. 

 

The testing community around the world is focused on tackling these persistent problems and, not surprisingly, is looking at whether AI is the answer. Consequently, copilots are now being used in testing processes such as web, mobile and performance test validation, but they do not address some of testing’s bigger challenges and risk adding yet more effort to what is already a broken process. They may speed up writing test scripts (the instructions that tell an automated system what to test), but users have reported that copilots simply generate a higher volume of tests that do not contribute to better products and instead lead to yet more maintenance.  

 

Just applying AI to an old paradigm is not the answer, in the same way that the rush to the cloud did not solve everyone’s problems. So, we must stand back and look at the bigger picture. How do we use AI to deal with testing’s problems and really add value to the process? To explain that, it makes sense to quickly recap how testing has evolved and where it stands today.  

 

Testing’s evolution 

Over the years, automated testing innovations have found ways to identify objects on screen, with users writing test scripts that tell the automated solution what to test. All these developments have had their benefits, but the same problem remains: as soon as any changes are made to the user interface or applications, the test scripts break. The challenge is that scripts try to imitate how a human interacts with a screen, but in machine language, and that is hard. For instance, as humans, we know how to close a pop-up window, but test automation has to be taught how to perform such tasks.  

 

The result of all this is a huge maintenance overhead. To put this into perspective, test professionals can spend as much as 30 to 40% of their time just maintaining tests.  

 

Let’s take regression testing as an example. Every change made to an app or UI means that regression tests must be rerun to check for issues, yet estimates suggest that only 20% of failing tests fail consistently; many failures are invalid, and the rest produce errors that take time to analyse before it is clear whether they are relevant. Consequently, regression suites keep growing, nullifying much of the benefit of moving from manual to automated processes.  

 

So, it is no surprise that many teams depend on manual testing, but that is not a sustainable approach. Apart from the fact that inefficient testing can lead to software vulnerabilities or performance issues, the pace at which software development is accelerating means that manual processes cannot keep up. These days, there can easily be hundreds of people involved with thousands of tests, which affect not just one system but others that may be mission-critical to companies or their customers. The longer a problem goes undiscovered, the harder and more expensive it becomes to fix. The CrowdStrike outage was a classic example of what can go wrong. 

 

AI to the rescue? 

Given how AI is transforming so many aspects of our world, exploring how it could help address software testing’s challenges once and for all makes sense. However, there is no point in adding AI if it just executes bad processes faster and creates more content in a regression suite. It is also logical to start applying AI where it could make the most difference, test maintenance, while going beyond what copilots can achieve. How about removing the need for those on-screen object locators, and even the necessity to write test scripts at all? After all, regardless of what tool is used, writing test scripts requires knowledge of scripting tools and the time to learn them. 

 

The breakthrough, which leapfrogs the need for object locators and scripts, is to visualise what is on the screen and use AI to understand it in context and dynamically. Using natural language processing, the user asks simple questions that the AI analyses and validates, so dependence on objects, code or even the technologies within an application is removed. The result is that tests can be validated in real time across any technology platform and by anyone; plus, when changes are made to an application, no manual intervention is needed, and tests can be reused repeatedly. 
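A toy sketch can make the contrast with locator-based scripts concrete. Here a plain-English question is checked against whatever text is currently on screen; in a real system an AI model would do the analysis, and the keyword matching below merely stands in for that step. All names and the sample screen text are hypothetical.

```python
# Hypothetical sketch of natural-language test validation: instead of
# locators, the tester asks a question and an analysis layer answers it
# from whatever is currently on screen. A real system would use an AI
# model; simple keyword matching here just stands in for that step.

def validate(question, screen_text):
    """Return True if every significant word in the question appears
    in the captured screen text (a stand-in for AI analysis)."""
    stop_words = {"is", "the", "a", "there", "does"}
    terms = [w.strip("?.!,") for w in question.lower().split()
             if w.strip("?.!,") and w.strip("?.!,") not in stop_words]
    return all(term in screen_text.lower() for term in terms)

# Simulated screen capture -- no object ids or locators involved.
screen = "Order confirmed. Total: 42.00 GBP. Delivery by Friday."

print(validate("Is the order confirmed?", screen))    # True
print(validate("Is there a refund button?", screen))  # False
```

Because the check is expressed against what is visible rather than against internal element ids, renaming or restructuring the underlying UI does not invalidate the question being asked.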

 

In addition, early experimenters with this new AI-driven approach to testing have shared that they can now validate data in ways that were impossible before. For instance, one financial institution says that it can now check that the graph shown on a screen matches the table of data beneath it. A large retail organisation sees this as a way to make sure that the words describing a product match the image shown. The retailer can then replicate that test across multitudes of products, all without manual intervention or coded test scripts.  
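The graph-versus-table check described above boils down to comparing two extracted representations of the same data. The sketch below assumes the extraction has already happened (in practice, an AI vision step would read the graph) and shows only the comparison logic; the quarter labels and values are invented for illustration.

```python
# Hypothetical sketch of the cross-artifact check described above:
# values read from a rendered graph (e.g. by an AI vision step) are
# compared against the table printed beneath it. Extraction is
# simulated here; only the comparison logic is shown.

chart_values = {"Q1": 120.0, "Q2": 135.5, "Q3": 128.0}  # read from graph
table_values = {"Q1": 120.0, "Q2": 135.5, "Q3": 129.0}  # read from table

def mismatches(chart, table, tolerance=0.5):
    """Return the labels where graph and table disagree beyond a
    tolerance (graph extraction is rarely pixel-perfect)."""
    return [k for k in chart
            if k not in table or abs(chart[k] - table[k]) > tolerance]

print(mismatches(chart_values, table_values))  # ['Q3']
```

The tolerance parameter reflects that values recovered from a rendered chart are approximate, so a small disagreement should not fail the test.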

 

Furthermore, the combination of AI and natural language processing means that people without any testing knowledge — even from a non-software background — can carry out tests, because the abstraction layer of script-writing is removed. In turn, this helps to address the huge shortfall of test expertise in the industry and supports strategies to shift left and test continuously from the first to the last step of software development. Low-code/no-code has been a buzzphrase for several years, but this is a true example of no-code in practice.  

 

However, test validation is just the starting point. AI has the potential to free team members from even more routine testing tasks so that they can refocus their time on more valuable work (for instance, more effort on working out what to test). Software testing has been broken for a long time, but with AI and other technological advancements, it can be transformed, removing laborious, error-prone processes and reducing time and costs that can be reinvested into other aspects of improving software. For so long, testing has seemed to many like an insurmountable mountain, one that has failed to keep up with the rest of software development. That is now changing, and testing can evolve from a bottleneck into a business asset.  
