As a CEO, I care deeply about my company’s services' availability and functionality.
As a startup, we are constantly changing the architecture, adding services and changing service-endpoints and behaviors.
While monitoring basic health signals is fairly simple and can be done by a variety of tools, monitoring my services' functionality and making sure there are no functional failures is a pretty cumbersome job. It usually requires building functional tests in a programming language, then updating those tests whenever my service architecture changes. That can be really hard work! I needed a quick, easy-to-use solution.
One of the use cases supported by UP9 is scriptless, automatically updating (self-healing) production testing that can also be used for functional monitoring.
I decided to use UP9 to monitor UP9’s availability and functionality.
Here is what I set out to achieve:
- Get started in a few minutes without help from developers
- Create a comprehensive test-suite that provides complete test-coverage for a few major workflows
- Auto update the test-suite when my service architecture changes
- Alert on failures
Scriptless, Synthetic Functional Monitoring Test-suite
To create the initial synthetic monitoring test-suite, I used the UP9 CLI, which launches a Puppeteer browser window. I used that window to record a few major workflows by simply using my application the way a user would.
up9 tap:start ag.up9.app.072101
The command opens a Puppeteer window, which I used to browse the UP9 production environment.
Observability in TEST¹
When I go to my UP9 account, I can see my services, service-endpoints and related business logic which are quite familiar to me.
Automatically Generated Test-code that Covers the Business Logic
The ML model includes the service business logic that appears in the form of a graph with service-endpoints as nodes and dependencies as edges.
UP9 automatically generates test-code that traverses the business logic graph providing complete test-coverage.
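To make the graph idea concrete, here is a minimal sketch (illustrative only, not UP9's internal representation; the endpoint names are invented) of a business-logic graph with service-endpoints as nodes and dependencies as edges, plus a depth-first walk that yields the API sequences a test-suite would need to cover:

```python
# Hypothetical business-logic graph: service-endpoints as nodes,
# call dependencies as edges. Endpoint names are invented for illustration.
business_logic = {
    "POST /login":      ["GET /profile"],   # login is followed by a profile fetch
    "GET /profile":     ["GET /orders"],
    "GET /orders":      ["GET /orders/{id}"],
    "GET /orders/{id}": [],                 # leaf: no further dependencies
}

def walk(endpoint, path=()):
    """Depth-first traversal yielding every root-to-leaf API sequence."""
    path = path + (endpoint,)
    deps = business_logic[endpoint]
    if not deps:
        yield path
    for dep in deps:
        yield from walk(dep, path)

# Each yielded tuple is one end-to-end workflow a generated test must exercise.
print(list(walk("POST /login")))
```

Covering every such root-to-leaf path is one simple way to reason about "complete test-coverage" of the graph.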
The machine-generated test-code includes the API sequence, variable extraction and result assertions, all discovered automatically without any user assistance. I can then customize the fail/success criteria, making them stricter or more relaxed.
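As a rough illustration of those three pieces, here is a hypothetical sketch of what such generated test-code could look like. This is not UP9's actual output: the endpoints, payloads and the stubbed `fake_request` client are all invented so the example is self-contained.

```python
# Stand-in for a real HTTP client so this sketch runs without a network.
# In generated test-code this would be real calls against the service.
def fake_request(method, path, body=None):
    if (method, path) == ("POST", "/orders"):
        return 201, {"order_id": "ord-42", "status": "created"}
    if method == "GET" and path.startswith("/orders/"):
        return 200, {"order_id": path.rsplit("/", 1)[-1], "status": "created"}
    return 404, {}

def test_create_then_fetch_order():
    # Step 1 of the API sequence: create an order.
    status, created = fake_request("POST", "/orders", body={"sku": "abc"})
    assert status == 201                    # result assertion
    order_id = created["order_id"]          # variable extraction

    # Step 2: the extracted variable feeds the next call in the sequence.
    status, fetched = fake_request("GET", f"/orders/{order_id}")
    assert status == 200
    assert fetched["order_id"] == order_id  # cross-call assertion

test_create_then_fetch_order()
```

The fail/success criteria mentioned above would live in the assertion lines: tightening them might mean asserting on the full response body, relaxing them might mean checking status codes only.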
Test Execution and Results Analysis
After a few minutes of customization and 'dialing in' the test-code, I ran it and then viewed the results in a 'single pane of glass'. I could see my system’s, services’ and endpoints’ reliability:
And in case there are failures, I can see what led to these failures (aka Root Cause Analysis):
Service Architecture Progression
As I mentioned earlier, at a startup things change every day: a new service appears, or existing service-endpoints change their behavior.
Traditionally, when the service architecture changes (aka Architecture Progression), old tests would break and I would have to update the tests or even regenerate them from scratch.
UP9 generates an architecture progression report that can be used to monitor my service architecture progression and see the changes:
Whenever UP9 identifies architecture changes, the test-code is automatically updated to match the up-to-date architecture and business logic.
Self-healing synthetic monitoring of service functionality is just one of the use cases of the modern test-automation provided by UP9.
UP9 provides out-of-the-box test automation for microservices, Kubernetes and cloud-native environments, replacing the need for developers to constantly build and maintain tests while providing comprehensive service test-coverage.
- Automatic generation and maintenance of CI-ready test-code, based on service traffic
- Observability into API-contracts, business-logic and service architecture
- Automatic reliability, test-coverage and root-cause analysis
- Machine-generated tests including functional, regression, performance and edge-case test-cases, covering all services and all service-endpoints
UP9 offloads the microservice testing workload from developers, giving them precious time back.
¹ Observability-in-Test is UP9's definition of the collection of observability artifacts in the context of testing, i.e., the information a software engineer in test, or a developer performing tests, cares about.