As a primary quality gate, unit testing processes can range from very simple ad-hoc or reactive efforts to highly optimized practices in which policies are regularly updated as part of root-cause analysis to prevent defects from ever reaching QA.
With ad-hoc unit testing efforts, developers independently choose to create and run unit tests while developing functionality, but the tests are not saved or maintained; instead, they remain isolated on individual machines. Ad-hoc unit testing characteristics include:
Any pockets of maturity at this point depend on the experience and initiative of individuals. There is no centralization of assets; every developer fends for themselves.
Tests and test artifacts are typically created as one-off solutions and, at best, live only on a developer's local machine. Tests are created without consideration of the business or use case.
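As a concrete illustration of this level, an ad-hoc unit test is often just a quick script a developer writes against a single function and never checks in. The function and test below are invented for this sketch, not part of the maturity model itself:

```python
import unittest

# Hypothetical function under test. In an ad-hoc effort, this test file
# would live only on one developer's machine, unversioned and unmaintained.
def apply_discount(price, percent):
    """Return the price after applying a percentage discount."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

class ApplyDiscountTest(unittest.TestCase):
    def test_typical_discount(self):
        self.assertEqual(apply_discount(100.0, 25), 75.0)

    def test_invalid_percent_rejected(self):
        with self.assertRaises(ValueError):
            apply_discount(100.0, 150)

if __name__ == "__main__":
    unittest.main()
```

The test itself is perfectly valid; what makes it "ad-hoc" is the process around it: no shared repository, no policy on what to test, and no guarantee anyone will ever run it again.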
Signs that it’s time to advance from this level include:
The next level beyond ad-hoc unit testing is reactive: the team adopts more unit testing, often motivated by external factors such as standards compliance or a major quality problem. At this level, the team and management have committed to unit testing, but implementation is inconsistent across the organization. This level is characterized by:
At the reactive level, the value of unit testing is recognized, but inconsistent measurement definitions diminish that value. Because adoption is incomplete, data isn't shared across teams, and it is difficult to gauge completeness or level of quality.
Moving beyond reactive testing to the proactive level, organizations realize the need to standardize the use of unit testing across the organization. A common unit testing policy is clearly documented, and teams recognize unit testing as part of the development process. Developers standardize on a testing platform and regularly create and extend unit tests during iterations.
This maturity level is characterized by:
At this level, organizations start to see real benefits from an organization-wide unit testing policy, usually in the form of a tangible decrease in serious defects. Increased visibility and traceability enable management to make better business decisions. Unit testing is institutionalized as part of the process and becomes expected development behavior.
At the managed level of unit testing maturity, organizations start to use a data-driven approach to decision making. A metrics-driven policy increases visibility as a centrally defined process manages unit testing activities. The development team now leverages the testing platform to distribute testing tasks directly to developer desktops based on coverage requirements, risk of failure, and other policy-defined metrics.
This maturity level is characterized by:
Test-driven development (TDD) becomes a viable option for driving code quality as the value of unit testing increases. Change-based testing becomes a reality because the cost of change is known in advance.
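To illustrate the test-first rhythm that TDD implies, here is a minimal sketch; the `Stack` class and its tests are invented for this example. The test is written first and fails (red), then just enough code is written to pass it (green), then the code is refactored with the tests as a safety net:

```python
import unittest

# Step 2 (green): the minimal implementation written only after the tests
# below existed and were failing. Nothing here goes beyond what a test demands.
class Stack:
    """Minimal LIFO stack, grown test-first."""
    def __init__(self):
        self._items = []

    def push(self, item):
        self._items.append(item)

    def pop(self):
        if not self._items:
            raise IndexError("pop from empty stack")
        return self._items.pop()

    def __len__(self):
        return len(self._items)

# Step 1 (red): these tests are authored before the implementation above.
class StackTest(unittest.TestCase):
    def test_push_then_pop_returns_last_item(self):
        s = Stack()
        s.push(1)
        s.push(2)
        self.assertEqual(s.pop(), 2)
        self.assertEqual(len(s), 1)

    def test_pop_on_empty_stack_raises(self):
        with self.assertRaises(IndexError):
            Stack().pop()

if __name__ == "__main__":
    unittest.main()
```

Because every behavior is pinned by a test before it exists, the cost of a later change is visible immediately: a failing test identifies exactly which expectation the change broke.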
In addition to the increase in quality and security of their software, organizations can start to use the data collected during development to drive better decisions. The organization seeks to leverage the merged and correlated data to perform advanced analytics that identify application hotspots.
At the optimized level of unit testing, the focus becomes organization-wide continuous improvement: policies are regularly updated as part of a root-cause analysis effort to prevent defects from ever reaching QA.
Unit tests serve as a mechanism to verify that policy and process are in sync. Test results are linkable and bi-directionally traceable to all data associated with software and device development. Traceability extends beyond the traditional borders of the SDLC.
Unit test policy is seamlessly integrated into a controlled quality, security, performance, and reliability framework, orchestrated from a centralized interface and inclusive of both development and non-development systems. True Business Intelligence is achieved.
Developers, testers, or managers kick off a test run based on any combination of technical and business requirements. The system automatically provisions the needed environments, VMs, and tests, then delivers results through a customizable business intelligence layer.
This is just a brief introduction to the levels of unit testing maturity. Most organizations today fall somewhere on the spectrum of this maturity model.
Parasoft, the leader in software test automation, has developed a unit testing maturity model that provides a detailed look at the five levels of unit testing maturity: Ad-hoc, Reactive, Proactive, Managed, and Optimized.
If you want to assess where your organization currently stands and see what’s involved in moving forward, download the complete Unit Testing Maturity Model.
Parasoft’s industry-leading automated software testing tools support the entire software development process, from when the developer writes the first line of code all the way through unit and functional testing, to performance and security testing, leveraging simulated test environments along the way.