The Software Development Life Cycle (SDLC)
We usually think of the software development life cycle for any piece of new code as a timeline progressing from design through development, integration, and deployment to production. The life cycle usually ends when the piece of software is replaced by a new version, and the cycle continues.
While the SDLC is intentional and the essence of our work, there is another life cycle to consider - that of a production failure.
The Production Failure Life Cycle
During software development we often inadvertently introduce faults and defects that become production failures - and they, too, have a life cycle.
A fault is defined as "an abnormal condition or defect at the component, equipment, or sub-system level which may lead to a failure" (Wikipedia).
A fault can result from a defect (e.g. bug) or an abnormal condition (e.g. poor configuration) that may lead to a production failure.
Faults are detected by running tests at different stages of the SDLC (e.g. unit, integration, or acceptance testing).
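As a minimal sketch of the first of those stages, here is what a unit test that catches a fault before it ships might look like. The `apply_discount` function and its test are invented for illustration; they are not from any real codebase.

```python
# Hypothetical example: a unit test acting as an early fault gate.
# apply_discount and its test are illustrative names only.

def apply_discount(price: float, percent: float) -> float:
    """Return the price after applying a percentage discount."""
    if not 0 <= percent <= 100:
        # Guard against an abnormal condition (e.g. a 150% discount)
        # that would otherwise produce a negative price in production.
        raise ValueError("discount must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

def test_apply_discount():
    # Normal behaviour.
    assert apply_discount(100.0, 20.0) == 80.0
    # The fault-detecting case: invalid input must be rejected.
    try:
        apply_discount(100.0, 150.0)
    except ValueError:
        pass
    else:
        raise AssertionError("expected ValueError for >100% discount")

test_apply_discount()
```

A test like this costs seconds to run in pre-production; the same defect discovered in production would already be a failure.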
Traditionally it was the responsibility of QA engineers to find faults early. Today it has become much more common for developers to also be responsible for detecting faults early.
A failure is "the state or condition of not meeting a desirable or intended objective, and may be viewed as the opposite of success" (Wikipedia).
A failure in production is considered expensive, as it is likely to disrupt normal business operations and cause monetary loss.
The Cost to Detect a Fault in Pre-production Vs a Production Failure
The cost of a production failure is significantly higher than the cost of detecting a fault in pre-production. We would much rather detect faults early, in pre-production, long before they spiral into production failures.
Defects, bugs, faults or failures are not new. They are part of software engineering and we use techniques such as unit, integration, acceptance, load, stress and chaos testing as well as monitoring, to try to detect and prevent them from happening, contain their 'blast radius' or recover fast once failures occur.
The Modern Test Pyramid
If we consider the traditional testing pyramid and the nature of the Microservice architecture, we find that the scope of testing has narrowed.
Unit testing is our first reliability gate; however, developers hit its limits fairly quickly, mostly due to service dependencies and independent service roadmaps.
While there are practices to support service-testing (e.g. mocks, API-contract testing), they are complex and costly to implement, resulting in very little service-testing being done despite the rising number of services in a modern architecture.
Most modern engineering teams reduce their scope to unit testing and end-to-end testing as the more common techniques, while component, service, and integration testing become harder and harder to implement.
While gaps are forming in the testing pyramid, new techniques are emerging that increase deployment confidence, such as canary releases and a stronger reliance on monitoring and fast rollbacks.
Microservices Testing is Hard
Microservice architecture is relatively new and complex, especially when combined with tools such as Kubernetes and Docker that strongly influence system behaviour.
While testing a single service in a Microservice architecture is theoretically easy, in practice it isn't: services are typically part of a larger business flow and depend on other services to complete their logic. As mentioned before, the existing practices that support service-testing (e.g. mocks, API-contract testing) are complex and costly to implement.
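To make the dependency problem concrete, here is a minimal sketch of the mocking practice mentioned above, using Python's standard `unittest.mock`. `OrderService` and its inventory dependency are invented names for illustration, not part of any real system.

```python
# Hypothetical sketch: testing one service in isolation by mocking
# its downstream dependency. All names here are illustrative.
from unittest.mock import Mock

class OrderService:
    """A service whose logic depends on a separate inventory service."""
    def __init__(self, inventory_client):
        self.inventory = inventory_client

    def place_order(self, item_id: str, qty: int) -> str:
        # The business logic cannot complete without the dependency.
        if self.inventory.stock(item_id) < qty:
            return "rejected"
        self.inventory.reserve(item_id, qty)
        return "accepted"

# Replace the real inventory service with a mock, so the test runs
# without deploying (or even having) the dependency.
inventory = Mock()
inventory.stock.return_value = 5

service = OrderService(inventory)
result = service.place_order("sku-42", 3)  # "accepted"

# Verify the interaction with the dependency, not just the return value.
inventory.reserve.assert_called_once_with("sku-42", 3)
```

Even this tiny example hints at the cost: the mock must be kept in sync with the real inventory service's behaviour, and that maintenance burden grows with every dependency and every service.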
The testing challenge is exacerbated by the number of microservices you'd like your testing strategy to cover.
Overall, what was supposed to be a federation of lightweight, de-coupled, testable services becomes very challenging to test, making it hard to prevent faults from reaching production.
Automatic Fault Detection
UP9 introduces a new, automatic testing technique that further increases deployment confidence by detecting faults early in pre-production and preventing them from becoming production failures. It fills the gap between unit testing and end-to-end testing, and provides additional layers of protection once in production.
By launching active protection layers that provide API test-coverage at multiple stages (e.g. integration, canary, production), UP9 can detect faults early and prevent them from becoming production failures.
The machine-generated API test-coverage is instant and self-updating, supporting large fleets of rapidly changing distributed microservices, with coverage that increases over time.
UP9 is a microservice testing platform for Cloud Native systems that can scale across any number of services. UP9 helps developers prevent software regressions and increase engineering productivity through effortless testing and instant virtualization.
- Understand how services interact with each other with automatic contract discovery.
- Ensure system-wide contract adherence with tests that write themselves.
- Start testing early with test environments as code, automatically mocking all service dependencies.
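As a rough sketch of the contract-adherence idea above (this is not UP9's implementation, just an invented illustration of what an API contract check does), a contract can be as simple as the field names and types a consumer relies on:

```python
# Hypothetical sketch of an API contract check: verify that a
# service response still carries the fields and types a consumer
# depends on. The contract and responses below are invented.

CONTRACT = {"id": int, "name": str, "in_stock": bool}

def matches_contract(response: dict, contract: dict) -> bool:
    """True if every contracted field is present with the expected type."""
    return all(
        field in response and isinstance(response[field], expected)
        for field, expected in contract.items()
    )

ok = matches_contract(
    {"id": 7, "name": "widget", "in_stock": True}, CONTRACT
)
# A provider-side change (id became a string, in_stock dropped)
# breaks the contract and would be caught before deployment.
broken = matches_contract({"id": "7", "name": "widget"}, CONTRACT)
```

Discovering such contracts automatically from observed traffic, rather than writing them by hand, is what removes the maintenance burden.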
UP9 offloads the microservice testing workload from developers, giving them precious time back.
You can sign up now for free!
Join our webinar about contract testing:
"How Contract Testing Helped Cycode Prevent Software Regressions in their Kubernetes and Kafka Environments". Feb. 17th 11:30 ET / 18:30 Israel.
Sign up now.