There’s nothing more frustrating for a developer than having to continuously rebuild things from scratch. A core principle of object-oriented design is that every item of effort should become an object, a referenceable artifact, so you never have to repeat yourself.
Despite this core principle, when it comes to mocking, developers regularly find themselves repeating the same process over and over again.
But why? When writing application code, developers are often communicating with the same external APIs and making the same calls to the same services in different ways. The problem with traditional mocks is that they are written at the code level and are specifically designed to work with the function being developed. As a result, every time that functionality needs to be exercised, a new mock has to be created.
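To make the problem concrete, here is a sketch of what a traditional code-level mock looks like in Python. The service URL, function names, and canned data are all hypothetical; the point is that the mock is hand-wired into one test of one function, so nothing about it is discoverable or reusable by the rest of the team.

```python
import json
from unittest.mock import patch
from urllib import request

def fetch_identity(account_number):
    # In production this would call the real external identity service
    # (the URL here is purely illustrative).
    url = f"https://identity.example.com/accounts/{account_number}"
    with request.urlopen(url) as resp:
        return json.load(resp)

def test_fetch_identity_returns_name():
    # The mock lives inside this one test: another developer exercising
    # the same identity API elsewhere must re-create it from scratch.
    canned = {"account": "1001", "name": "Pat"}
    with patch("urllib.request.urlopen") as mock_open:
        mock_open.return_value.__enter__.return_value.read.return_value = (
            json.dumps(canned).encode()
        )
        assert fetch_identity("1001")["name"] == "Pat"
```

Every new function that touches the same service tends to grow its own copy of this patching boilerplate, which is exactly the duplication described above.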
When using a traditional mocking framework, it is difficult to share mocks that have already been created, not only because it may not be known where they exist in the code base, but also because it is difficult to understand which requirement a specific mock is tied to. What ends up happening is that individual team members often create the same mock as the person sitting right next to them. This is simply wasted effort and a loss of developer time.
It also becomes challenging to collaborate once a developer has created a mock. There is no magic dashboard that exists where you can post notifications on the mocks that have been created to keep the team informed.
I was recently at a healthcare organization that was using mocking as a common development practice, and they had a service provider that was always going offline, which made it a common target for mocking. Each of the individual developers had made a mocked interface for it in their own code base. The mocks were all slightly different but achieved the same purpose. As I interviewed the developers, I discovered that about 20 copies of the same mock existed. This was a surprise even to them. When asked about the duplicated work, the answer, in hushed tones, was not completely unexpected: “We’re too busy to communicate.”
Sound familiar? (I wish I had a great statistic here to make you feel better.)
But mocks are necessary, as any developer or tester will tell you, because you need the ability to decouple yourself from the rest of the world while you develop. Mocks are a way to surround your application with a protected environment. But as we’ve seen, the approach has inherent challenges: duplicated effort, poor discoverability of existing mocks, and limited collaboration.
Enter: service virtualization. With this testing practice, you can simplify the process of mocking and create a library of reusable virtual services that share core functionality, so you can stop creating virtual services over and over again.
Let’s look at an example. Say there is an existing service that provides information about a person’s identity: it takes an incoming account number and returns a response for that person. Now a new virtual service needs to be developed that, given an account number, returns financial details.
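To make the example concrete, here is a minimal, illustrative sketch of what such a virtual service might look like as a plain HTTP stub. This is not Parasoft’s .pva format; the endpoint paths, port, and data are all assumptions. The point is that swapping out the canned data and the response schema would turn the same skeleton into the financial-details service.

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

# Canned identity data, keyed by account number (hypothetical values).
# Replacing this dictionary (the data) and the payload shape (the schema)
# is all it takes to repurpose the skeleton for financial details.
RESPONSES = {
    "1001": {"account": "1001", "name": "Pat", "status": "active"},
}

class IdentityStub(BaseHTTPRequestHandler):
    def do_GET(self):
        # Treat the last path segment as the account number,
        # e.g. GET /accounts/1001
        account = self.path.rstrip("/").split("/")[-1]
        body = RESPONSES.get(account)
        self.send_response(200 if body else 404)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        payload = body if body else {"error": "unknown account"}
        self.wfile.write(json.dumps(payload).encode())

    def log_message(self, *args):
        # Keep the stub quiet during tests.
        pass

# To run standalone:
#   HTTPServer(("localhost", 9080), IdentityStub).serve_forever()
```

A real virtual service adds much more on top of this (recorded traffic, performance profiles, deployment to a shared server), but the skeleton-plus-data structure is what makes reuse possible.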
With service virtualization, much of the original service can be leveraged when creating the new virtual service. The only things that separate the two services are the schema and the data. And as an organization builds more and more virtual services, the repository of artifacts it can reuse grows as well. This solves the initial challenge of having to create the same virtual service over and over again.
Unlike mocks, virtual services are highly shareable, and their internal modules can be reused as well. Virtual services, or .pva files, can be stored as XML and easily checked into source control. If a service simulates specific functionality for a particular API, you can search for the artifact either in source control or, more easily, on a shared virtualization server. As a team grows in its usage of service virtualization, it can leverage the existing server-sharing capabilities: developers connect their desktops directly to the server, search for the artifact they require, pull it right down, and immediately start using it. That solves the challenge of discovering virtual services that have been created and getting immediate access to them.
Parasoft Virtualize also provides a marketplace of both private and public artifacts built from common virtualization use cases. This allows you to get a quick start and build an internal knowledge base across your organization that simplifies the creation of virtual services going forward. As you start leveraging virtual services, you can easily tie each virtual service to its initial API through naming conventions, descriptions, or tagging.
Your development partners can then search, right in the web browser, for any virtual assets that have been created for the APIs they want to mock, see exactly what’s been created, and have it immediately deployed to their desktops.
This solves the challenge of tying together virtual services with their specific APIs and requirements.
Finally, given all the above solutions, your team can build a sustainable workflow that gives developers and testers options when they realize a mock is required. Instead of spending time going back and forth, they can query the Parasoft ecosystem for a mock to suit their specific needs, and if one exists they can get instant access to it. If not, they can create a virtual service that the team can reuse and that will be discoverable by anyone who requires it in the future. This solves the challenge of collaboration.
You can use the free version of Parasoft Virtualize, the Virtualize Community Edition, to start collaborating with your virtual infrastructure. Everything I mentioned above is available there, and you can get going with a quick download: assets can be checked into source control, promoted to a shared team server, and uploaded to your team’s private marketplace. Happy virtualizing!
A Product Manager at Parasoft, Chris strategizes product development of Parasoft’s functional testing solutions. His expertise in SDLC acceleration through automation has taken him to major enterprise deployments, such as Capital One and CareFirst.