Best practices for test automation emphasize reliability, portability, reusability, readability, maintainability, and more. But how can your existing automated test suite adopt these qualities? Should you address these issues with your current tests, or create an entirely new set of tests? Here are some questions that will help you determine if your test automation maintenance program is operating as it should be.
Today, with the adoption of Agile, DevOps, and CI/CD, speed has increased, cycles are more complex, and ensuring the quality, functionality, and usability of your applications earlier and more often is paramount. This transformation has placed demands on software testing all along the way. Worse, following these modern development methodologies while trying to incorporate existing ("old") frameworks and practices has placed new requirements on teams, who must evolve their test methodologies.
But of course, it isn’t easy. Issues with tests come in many shapes and sizes, and a poor starting point or a rushed effort compounds the problem, increasing costs and risk exponentially. It’s difficult to appreciate a solution without truly understanding the problem. One size doesn’t fit all, and there is no single “best practice” solution that applies to every testing problem, including automating tests, which is perhaps the most important part of testing.
The good news is some have found a way.
So, where do you start? For example, you may have chosen to automate your tests and need to learn how to create the right foundation. How do you deal with the challenges inherent in implementing the test automation best practices of reliability, portability, reusability, readability, maintainability, and more? And if you have been automating tests for a short while, how do you help your team keep the faith?
In this article, get answers to key questions as Vinay Shah, a long-time Parasoft Principal SQA Engineer and one of our experts, recounts his real-life experience discovering, understanding, and implementing best-practice processes, and shares his insight.
Below is a taste of that article. To read the full article, click here.
“Automation” is not a new buzzword in the industry. With the evolution of e-commerce and rapid access to mobile technology, delivering software applications as quickly as possible has been a trend for some time. But it’s difficult to appreciate the solution without truly understanding the problem. One size doesn’t fit all, and there is not one perfect “best practice” solution that applies to all automation problems. We must weigh the cost, effort, and risk against potential benefits.
There are tons of online resources about best practices for test automation that emphasize reliability, portability, reusability, readability, maintainability, and more. When I first started creating automated tests, I found this information helpful as well as stressful. How could it be practical to adopt all these practices for your tests from the get-go? If you are a test automation engineer, I’m sure you have faced some of these challenges as well at some point in your career.
Let me start with my journey of writing browser automation tests, then get into what I learned from my mistakes and how I overcame challenges.
Writing tests was initially time-consuming, and I was always trying to improve them as I cycled through maintenance. Just like any other development task, creating tests comes with deadlines and management expectations, and balancing these factors is crucial for success in a test automation project.
To keep my first project on schedule, I rushed to create the tests and didn’t consider some of the best practices mentioned earlier. My tests were stable and passed 100% of the time—until the application under test (AUT) started changing a few months later. Then the real quality of my tests came to the surface, and they became a maintenance nightmare.
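Much of the brittleness described here typically comes from UI locators hard-coded into every test, so a single change to the AUT breaks dozens of them. One common remedy (not necessarily the approach the full article describes) is the Page Object pattern. Here is a minimal, hypothetical sketch in Python, with a `FakeDriver` stub standing in for a real Selenium WebDriver so the example is self-contained; all class and locator names are illustrative.

```python
class FakeDriver:
    """Stub driver: 'finds' an element by echoing back its locator.
    In a real suite you would pass in webdriver.Chrome() or similar."""
    def find_element(self, by, value):
        return f"<element {by}={value}>"


class LoginPage:
    # Locators live in ONE place. When the UI changes, only these
    # two lines need updating -- not every test that logs in.
    USERNAME = ("id", "username")
    PASSWORD = ("id", "password")

    def __init__(self, driver):
        self.driver = driver

    def username_field(self):
        return self.driver.find_element(*self.USERNAME)

    def password_field(self):
        return self.driver.find_element(*self.PASSWORD)


# Tests talk to the page object, never to raw locators.
page = LoginPage(FakeDriver())
print(page.username_field())  # -> <element id=username>
```

The design choice is simply to centralize what changes most often: when the AUT renames a field, one edit to the page object fixes every test that uses it.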
Whenever a test failed, we spent lots of time trying to understand the cause so we could determine whether it was due to a regression, an expected change in the AUT, or an environmental issue such as a new browser or system updates. After weeks of troubleshooting and frustration, we spent some time identifying the issues that manifested from our tests.
To learn what they discovered, read the full article here on StickyMinds.
As Parasoft's Principal SQA Engineer, Vinay Shah has over 18 years of experience in software development and testing.