Service virtualization is essential for testing early. It uncorks a major bottleneck in the testing process and helps make shift-left and continuous testing possible. But the downside is that creating virtual services takes time.
Organizations new to the practice also need a way to demonstrate quick wins that inspire teams to adopt it. When presented correctly, your team can quickly see that service virtualization is an indispensable strategy for overcoming constraints in their test environments.
This guide introduces a low-commitment, hands-off approach to service virtualization that can automatically fail over to simulated API responses based on continuous recording.
A core component of service virtualization that makes this possible is the message proxy, which acts as a man-in-the-middle between an application under test and a downstream service.
Message proxies are used to monitor and record traffic, as well as direct the flow of traffic between real and virtual endpoints. One of the advanced features of the message proxy in Parasoft Virtualize is Learning Mode. It continuously learns from recorded traffic and maintains a simulation of the responses it has seen, which mitigates scenarios like the real service going down and reduces costs from pay-per-transaction dependencies.
For situations where advanced logic or fine-grained control of the virtual responses is unnecessary, Learning Mode eliminates the time commitment of creating and maintaining virtual services.
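Conceptually, Learning Mode behaves like a continuously updated record-and-replay cache: each request/response pair the proxy observes is stored, and the stored response can be served when the real service is unavailable. The sketch below illustrates that idea only; it is not Parasoft's actual implementation, and the request keys and responses are made up:

```python
class LearningCache:
    """Toy record/replay store: keeps the most recent response seen per request."""

    def __init__(self):
        self._responses = {}

    def record(self, request_key: str, response: str) -> None:
        # Continuous recording: newer traffic overwrites older entries.
        self._responses[request_key] = response

    def replay(self, request_key: str):
        # Returns the learned response, or None if this request was never seen.
        return self._responses.get(request_key)


# Hypothetical traffic flowing through the proxy:
cache = LearningCache()
cache.record("GET /accounts/12345", '{"balance": 515.50}')
print(cache.replay("GET /accounts/12345"))  # the learned response
```

Because recording is continuous, the simulation tracks the most recent behavior of the real service rather than a one-time snapshot.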
Unlike use cases where developers and testers are prototyping or simulating a service for local testing, Learning Mode is best applied in shared test environments where unstable or costly downstream dependencies affect teams' ability to test.
The entire process of virtualizing a service with Learning Mode often takes less than an hour. With the help of an Infrastructure or DevOps engineer familiar with deployments into your test environments, it can go faster than you expect. The key steps are:

1. Create a message proxy on your Virtualize server.
2. Reconfigure your application under test to point at the message proxy instead of the real service.
3. Verify the messaging flow by monitoring the proxy's traffic.
The following diagram describes how the Virtualize message proxy fits into a system’s architecture:

Parasoft's flexible server-based deployments are ideal for infusing service virtualization into your existing test environments.
The primary components of the solution are:
Virtualize Server
This is the server where message proxies and virtual assets are deployed and integrated into your test environment.
Continuous Testing Platform (CTP)
This is a web-based administration and user portal for Parasoft Virtualize.
Virtualize Desktop
This is a service virtualization desktop application with a user-friendly UI and an AI Assistant chatbot, intended for power users building virtual services with more sophisticated requirements on response behavior.
This guide assumes you have already deployed and licensed Parasoft CTP and Virtualize server. It also assumes you have identified an application under test with a downstream API dependency that you would like to virtualize.
Note: Networking is a key prerequisite for injecting endpoints hosted by Parasoft Virtualize into your test environment. Before proceeding, it is important to check that the appropriate ports to your Virtualize server are open (default: 9080/9443) and that your Virtualize server can make outbound connections to other services in your test environment. Furthermore, some test environments enforce HTTPS where the self-signed certificate Virtualize ships with is insufficient; in these cases, you may need an SSL certificate generated for the Virtualize server.
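One quick way to sanity-check this networking prerequisite is a simple TCP probe against the Virtualize server's listener ports. The helper below is a generic sketch; the hostname shown is a placeholder for your own Virtualize server:

```python
import socket


def port_open(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port can be established."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        # Covers refused connections, timeouts, and DNS failures.
        return False


# Placeholder hostname -- substitute your Virtualize server:
# port_open("virtualize.example.internal", 9080)
# port_open("virtualize.example.internal", 9443)
```

Run the same check from the Virtualize server toward your downstream services to confirm outbound connectivity as well.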
The first step in the process is creating a message proxy on your Virtualize server. This can be done with Parasoft CTP using a web browser.
After logging in to CTP, you will see the Environment Manager Workspace page.
Click the Environment Manager menu on the top left and select Service Virtualization.
You will be redirected to a page that provides a thin-client interface to your Virtualize server.
Right-click on the Virtualize server you have connected to CTP and select Create Message Proxy.
Give the message proxy a name and click Save.
You will see your new message proxy added as a node under your Virtualize server. Right-click the node and select Add HTTP Connection.
This is where you will configure the message proxy, whose listener endpoint will be injected in-between your application under test and service to be virtualized.
Define a Proxy listen path for your message proxy. This completes the message proxy’s endpoint that you will use to replace the endpoint of the service to be virtualized in the deployment configuration of your application under test.
Next, enable the Use fallback connection checkbox and fill out the form fields for host, port, and path. Use host.virt.internal as shorthand for the Virtualize server host. This configures a secondary connection where the virtual asset will be automatically deployed. When the real service becomes unavailable on the primary connection, the message proxy will fail over to the secondary connection that points to the virtual service deployed on the Virtualize server.
Click Save, and then right-click the message proxy node and click Enable.
The message proxy on your Virtualize server is now active and ready to be integrated into your test environment.
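The fail-over behavior you just configured can be pictured as: try the primary connection first, and only route to the fallback when the primary is unreachable. The following is a simplified illustration of that logic, not how Virtualize is implemented internally; the service names are made up:

```python
def forward(request, primary, fallback):
    """Send the request over the primary connection; fail over on connection errors."""
    try:
        return primary(request)
    except ConnectionError:
        # Primary (real service) unreachable -- fall back to the virtual asset.
        return fallback(request)


# Hypothetical endpoints for illustration:
def real_service(request):
    raise ConnectionError("downstream service is down")


def virtual_asset(request):
    return {"status": 200, "body": "learned response"}


print(forward({"path": "/accounts"}, real_service, virtual_asset))
```

From the application's point of view nothing changes: it always calls the proxy endpoint, and the proxy decides where the request actually goes.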
At this point, an Infrastructure or DevOps engineer will be needed to help you re-configure the deployment of your application under test so that it points to your message proxy endpoint instead of directly to the service you want to virtualize.
The example in this guide is based on the Parabank demo application, which has a convenient admin web page for dynamically switching the downstream API the application depends on.
Keep in mind that the following steps are specific to the Parabank demo application; most application deployments instead have a properties file or secret that defines external connection endpoints. That is where a one-time change needs to be made so that, upon redeployment, the application talks to your Virtualize server's message proxy as a passthrough to the downstream service.
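For example, if an application read its downstream endpoint from a properties file, the one-time change might look like the fragment below. The property name and hostnames are hypothetical placeholders, not settings from Parabank or Virtualize:

```properties
# Before: the application talks directly to the real downstream service
#downstream.api.endpoint=https://payments.example.internal/api

# After: the application talks to the Virtualize message proxy (passthrough)
downstream.api.endpoint=http://virtualize.example.internal:9080/myproxy
```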
Select the message proxy you created under the Virtualize server and copy the Proxy Connection Settings.
In the case of Parabank, the Admin Page lets you conveniently reconfigure the REST Endpoint it depends on without needing to change a configuration file and redeploy the application.
Before letting the Infrastructure or DevOps engineer go, you want to confirm that the changes made to the deployment of your application have not resulted in any noticeable change of behavior. It should now be directing its downstream API traffic to your message proxy, and the proxy should be forwarding those requests along to the real service your application depends on. Monitoring the traffic of the message proxy is a convenient way to make sure everything is working as expected.
From the Service Virtualization page in CTP, navigate to Events.
Find your message proxy in the list of deployments on the Virtualize server, and make sure both the checkbox and monitoring icon are enabled.
You may see some Event Messages already; for the time being, click Clear to start with a clean slate before you begin testing the messaging flow between your application under test, the message proxy, and the downstream service.
At this point, go ahead and exercise your application under test and then return to the Events page in CTP to view the monitored traffic. In the case of Parabank, we will log in.
Coming back to the CTP Events page, we see a notification that new event messages are available.
There will typically be a pattern of four to five logged messages relating to a request-response interaction.

Request Received
The proxy receives an incoming request.
Proxy Request Sent
The proxy forwards the request to its destination endpoint.
Info
When Learning Mode is enabled, an event gets logged when the proxy records the traffic to disk.

Proxy Response Received
The proxy receives a response from the destination endpoint.
Response Sent
The proxy forwards the response back to the client application.
There are two use cases for the Learning Mode message proxy.
When the primary connection is set to the real service, your application primarily depends on the real service while testing. This mode is the closest to how you were testing before service virtualization, except that when the downstream service becomes unstable, you are no longer blocked from testing: a virtual service fills in until the real service becomes available again.
When the primary connection is set to the virtual service, your application primarily depends on the virtual service while testing. When transactions with that service have a cost, this mode can be very advantageous in reducing how much money is being spent to support testing.
The most common challenge is connectivity. These issues usually fall into three categories: inbound connections to your Virtualize server being blocked (default ports 9080/9443), the Virtualize server being unable to make outbound connections to other services in the test environment, and HTTPS enforcement where the Virtualize server's certificate is not trusted.
Monitoring the event messages from your message proxy can be very helpful in troubleshooting connectivity issues.
Service virtualization setup with the Learning Mode message proxy is designed for simple correlation between requests and responses. If you find that the virtual service is failing over to the real service more often than it should, or is returning stale data, some customization is available. When you download the virtual asset generated by Learning Mode into your Virtualize Desktop workspace, you can configure exclude patterns for request message correlation:

There may be request parameters that are irrelevant to the recorded response; excluding them from correlation generalizes the cases in which an incoming request matches a recorded response.
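The effect of an exclude pattern can be illustrated with a simple matcher: parameters in the exclusion set are ignored when deciding whether an incoming request correlates to a recorded one. This is a conceptual sketch, not Virtualize's actual correlation engine, and the parameter names are invented:

```python
def matches(incoming: dict, recorded: dict, exclude: frozenset = frozenset()) -> bool:
    """True if the two requests agree on every parameter not in the exclusion set."""
    keys = (incoming.keys() | recorded.keys()) - exclude
    return all(incoming.get(k) == recorded.get(k) for k in keys)


recorded = {"accountId": "12345", "requestTime": "2024-01-01T09:00:00Z"}
incoming = {"accountId": "12345", "requestTime": "2024-06-01T14:30:00Z"}

print(matches(incoming, recorded))                              # False: timestamps differ
print(matches(incoming, recorded, frozenset({"requestTime"})))  # True: timestamp excluded
```

Excluding a volatile parameter like a timestamp lets many distinct incoming requests correlate to the same recorded response, reducing unnecessary fail-over to the real service.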
Or you may have requirements for the virtual service that are more sophisticated, for example:
These are good examples of needs that exceed what a Learning Mode virtual service can provide. Learning Mode is ideally suited for very fast setup, where very little time is invested in creating and maintaining virtual services, and it can provide a lot of value quickly when the use case is fairly static. Advanced use cases like the ones above are straightforward to implement but require the virtual assets to be built in Virtualize Desktop and then deployed on the Virtualize server. Parasoft Virtualize allows for a lot of flexibility: you can even create assets that support these advanced use cases and then use the Learning Mode responses as a catch-all before finally failing over to the real service.