In this Q&A with Parasoft, we asked Frank Jennings, TQM Performance Director at Comcast, to share his service virtualization experiences using Parasoft Virtualize. “Service virtualization,” he said, “has allowed us to get great utilization from our testing staff, complete more projects on time, and also save money by lowering the overall total cost of performing those tests for a given release.”
For more, read on.
Parasoft: Why did Comcast explore service virtualization?
Frank Jennings: There were two primary issues that led Comcast to explore service virtualization. First, we wanted to increase the accuracy and consistency of performance test results. Second, we were constantly working around frequent and lengthy downtimes in the staged test environments.
My team executes performance testing across a number of verticals in the company, from business services to our enterprise services platform, to customer-facing UIs, to the backend systems that perform the provisioning and activation of devices for subscribers on the Comcast network. While our testing targets (the applications under test, or AUTs) typically have staged environments that accurately represent the performance of the production systems, the staging systems for the AUTs’ dependencies do not.
Complicating the matter further was the fact that these environments were difficult to access. When we did gain access, we would sometimes impact the lower environments (the QA or integration test environments) because they weren’t adequately scaled and could not handle the load. Even when the systems could withstand the load, we received very poor response times from them. This meant that our performance test results were not truly predictive of real-world performance.
Another issue was that we had to work around frequent and lengthy downtimes in the staging environments, which were unavailable during frequent upgrades and software updates. As a result, we couldn’t run our full performance tests. Performance testing teams had to switch off of key projects at critical times just to stay productive: they knew they wouldn’t be able to work on their primary responsibility because the systems they needed to access simply weren’t available.
Parasoft: How did this impact the business?
Frank Jennings: These challenges were driving up costs, reducing the team’s efficiency, and impacting the reliability and predictability of our performance testing. Ultimately, we found that the time and cost of implementing service virtualization was far less than the time and cost associated with implementing all the various systems across all those staging environments — or building up the connectivity between the different staging environments.
Parasoft: Did you consider expanding service virtualization beyond performance testing?
Frank Jennings: Yes, the functional testing teams sometimes experience the same issues with dependent systems being unavailable and impeding their test efforts. They’re starting to use service virtualization so that they can continue testing rather than get stuck waiting for systems to come back up.
We’re currently in the process of expanding service virtualization to the functional testing of our most business-critical applications. We’re deploying service virtualization not only to capture live traffic for those applications, but also to enable functional testers to quickly select and provision test environments. In addition to providing the team the appropriate technologies and training, we’re taking time to reassure them that their test results won’t be impacted by using virtual assets rather than live services.
Parasoft: In your opinion, what is the key benefit of service virtualization?
Frank Jennings: The key benefit of service virtualization is the increased uptime and availability of test environments. Service virtualization has allowed us to get great utilization from our testing staff, complete more projects on time, and also save money by lowering the overall total cost of performing those tests for a given release.
Parasoft: If you could start all over again with service virtualization, what would you do differently?
Frank Jennings: I think things would have run more smoothly if we had a champion in place across all teams at the beginning to marshal the appropriate resources. The ideal rollout would involve centralizing the management and implementation of the virtual assets, implementing standards right off the bat, and using the lessons learned in each group to make improvements across all teams.
Parasoft: Any other tips for organizations just starting off with service virtualization?
Frank Jennings: Make sure that your virtual assets can be easily reused across different environments (development, performance, system integration test, etc.). It’s really helpful to be able to capture data in one environment and then use it across your other environments. Obtaining data for realistic responses can be challenging, so you don’t want to constantly reinvent the wheel.
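Parasoft Virtualize handles recording and replay with its own tooling, but the reuse idea above can be illustrated with a minimal conceptual sketch. The key point is keeping the captured response data separate from the stub logic, so the same recording can back a virtual asset in any environment. (All endpoints and data below are hypothetical, for illustration only.)

```python
# Conceptual sketch of a "virtual asset": a stub that replays recorded
# responses for a dependent service, keyed by request path. The recorded
# data lives apart from the replay logic, so one capture can be reused
# across development, performance, and system-integration environments.

# Hypothetical recording; in practice this would be captured from
# live traffic against the real dependency.
RECORDING = {
    "/subscriber/123/status": {"status": "active", "latency_ms": 40},
    "/device/abc/provision":  {"result": "ok", "latency_ms": 120},
}

def replay(path: str) -> dict:
    """Return the recorded response for a request path,
    or an error stub when no recording exists."""
    return RECORDING.get(path, {"error": "no recording for " + path})

# Only the endpoint the application points at changes between
# environments; the captured data itself is reused as-is.
print(replay("/subscriber/123/status")["status"])  # -> active
```

A real virtual asset would also simulate the recorded latency and match on more than the path (method, headers, payload), but the separation of data from logic is what makes cross-environment reuse cheap.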
Also, don’t underestimate the amount of education that’s needed to get the necessary level of buy-in. For each team or project where we introduced service virtualization, we needed to spend a fair amount of time educating the project teams and business owners about what service virtualization is, what business risks are associated with using it for testing, and how service virtualization proactively mitigates those risks. People are understandably nervous when they hear that you’re removing live elements from the testing environment, so some education is needed to put everyone at ease.
Want to learn more about service virtualization at Comcast, including how it saved them about $500,000 and helped them reduce downtime by 60%? Read Service Virtualization, Performance Testing and DevOps at Comcast to learn what results Comcast has been able to achieve after approximately 3 years of service virtualization—and why service virtualization is now a key component of their DevOps initiative.
Parasoft’s industry-leading automated software testing tools support the entire software development process, from when the developer writes the first line of code all the way through unit and functional testing, to performance and security testing, leveraging simulated test environments along the way.