Blazin’ A New Trail
I launched my second startup at the end of 2011. We had a great VC backing us, and we knew we had a solution for developers that had been sorely lacking in the market. So when BlazeMeter, a load-testing cloud based on open-source Apache JMeter, came out of stealth, it hit the ground running and we grew quickly, much quicker than we had anticipated.
In 2016, BlazeMeter was acquired by CA Technologies, where we were introduced to hundreds of Fortune 1000 companies that chose our solution. The journey took an unexpected turn in 2018, when CA Technologies was acquired by Broadcom. I wasn’t sure BlazeMeter would survive the second acquisition, but it did, and I’m proud to report that BlazeMeter emerged as the lead product in the Continuous Testing division of Broadcom. In retrospect, those nine years were both intense and fulfilling for me and my current co-founders, Alex Haiut and Andrey Pokhilko, both of whom were a major part of BlazeMeter’s growth and scale.
Now the three of us have teamed up again to launch UP9, and here is why.
Going back (again) to 2015: BlazeMeter had really taken off. We had thousands of great customers, including ESPN, GAP, NBC, the NFL, DirecTV, Walmart, Target and many more.
Like other agile tech companies, BlazeMeter was constantly releasing new features. But with all of the ups, we had one major, ongoing down that became a dreaded, tiring ritual. Every major release brought with it the exact same pain point––features that once worked seamlessly would be rendered broken and an inevitable influx of angry customer calls would ensue.
This ritual put the company’s engineering team and its leaders under extreme pressure. Our teams were constantly firefighting to fix regressions immediately. Our investment in testing (both in people and in automation) kept increasing while we did our best to balance its adverse impact on release velocity, and the process only kept getting worse. As our confidence in new releases weakened, we took precautions, as many others did, by imposing release freezes to avoid downtime during high season (e.g. Black Friday, the Super Bowl, etc.).
Size Doesn’t Matter
At both CA Technologies and Broadcom, all three of us (Alex, Andrey and myself) worked with tens of thousands of enterprise customers through their Continuous Testing journey. We helped companies on the digital-transformation path by guiding them on how to shift testing to the left, adopt open-source strategies and move away from the traditional CoE (Center of Excellence) paradigm to become test enablers for their entire organization (we dubbed it CoE 2.0, the Center of Enablement). We even received the highest product score from Gartner for Continuous Testing and API Testing.
But time and time again, we saw the same ritual with the customers we worked with – no matter what size a company was, we shared the same struggle:
- The testing workload and investment in testing grew over time
- Testing was and still is considered the main impediment to speed to market
- Companies still feel exposed in terms of reliability, no matter how much testing they do
Fast forward to 2020, and those concerns have not changed.
While larger, more established companies continued to grow their investment in testing, unicorns started to abandon testing altogether, not because it wasn’t needed, but because of the ROI: it became too complex and too expensive.
Testing is Broken
As developers and engineering leaders, who have built both commercial applications and open source projects, and through our decade-long journey working with enterprise customers enabling them to shift testing to the left, we came to the realization that testing is broken and we decided to do something about it.
So, we set out to dissect the testing process down to the most granular details to see how we could rebuild it as a modern, autonomous alternative to traditional testing.
Testing is broken, and as such it is adversely impacting release velocity. Test planning, test creation, test maintenance, test automation and results analysis are a heavy burden on developers and slow down every release.
To improve software reliability at the pace of software development, we decided to…
Using two cups of machine learning and a dash of artificial intelligence, UP9 provides out-of-the-box test automation for Microservices and Cloud-Native environments. Microservices whose testing is assigned to UP9 are thoroughly and continuously tested, replacing the need for developers to perform the aforementioned laundry list of test activities.
UP9 supports modern Microservice orchestration environments such as AWS ECS, AWS EKS, Google GKE and Azure AKS, as well as OpenShift, Kubernetes and Docker Enterprise, in a cloud or an on-prem environment.
Here are a few of UP9’s features:
- Automatic generation and maintenance of CI-ready test-code, based on service traffic
- Observability into API-contracts, business-logic and service architecture
- Automatic reliability, test-coverage and root-cause analysis
- Machine generated test-code covering functional, regression, performance and edge-case test-cases, covering ALL services and ALL service-endpoints
- The entire process is supervised by a real software-engineer-in-test
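To make the first feature above concrete, here is a minimal sketch of what deriving a contract test from recorded service traffic could look like. This is purely illustrative; the function names (`infer_contract`, `check_contract`) and the recorded-sample format are my own assumptions, not UP9’s actual API or output.

```python
# Hypothetical sketch: inferring a service contract from recorded traffic,
# then using it to catch a regression. Not UP9's real implementation.

def infer_contract(samples):
    """Infer a minimal contract (accepted status codes plus response
    field types) from a list of recorded responses."""
    statuses = {s["status"] for s in samples}
    # Keep only the fields present in every recorded body
    common = set.intersection(*(set(s["body"]) for s in samples))
    types = {k: type(samples[0]["body"][k]).__name__ for k in common}
    return {"statuses": statuses, "fields": types}

def check_contract(contract, response):
    """Return a list of contract violations for a newly observed response."""
    issues = []
    if response["status"] not in contract["statuses"]:
        issues.append(f"unexpected status {response['status']}")
    for field, tname in contract["fields"].items():
        if field not in response["body"]:
            issues.append(f"missing field '{field}'")
        elif type(response["body"][field]).__name__ != tname:
            issues.append(f"field '{field}' changed type")
    return issues

# Recorded traffic for GET /users/{id}
recorded = [
    {"status": 200, "body": {"id": 1, "name": "ada", "active": True}},
    {"status": 200, "body": {"id": 2, "name": "alan", "active": False}},
]
contract = infer_contract(recorded)

# A later deployment drops "active": the regression is caught
# without anyone having hand-written a test for that field.
regression = {"status": 200, "body": {"id": 3, "name": "grace"}}
print(check_contract(contract, regression))  # → ["missing field 'active'"]
```

The point of the sketch is the workflow, not the code: the contract is learned from traffic, so when the service changes, the tests can be regenerated rather than hand-maintained.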
Why the Name UP9?
When we examined the testing practice, one of the obvious, but also eureka! moments was that testing is merely a means to an end. Engineers use pre-production testing to improve their software reliability. We understood that to create any sort of change, we needed to focus on the endgame – software reliability.
We built UP9 to help engineering teams in their quest to improve their software reliability while in pre-production, by preventing faults from reaching production and quickly detecting software failures in production.
Our goal was to help developers improve their software uptime and the 9’s that matter (to go from 99.99 to 99.999). Big shout-out and thanks to my friend Barak Yagur for his input on the name.
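Some back-of-the-envelope arithmetic shows why that extra 9 matters:

```python
# Allowed downtime per year at a given availability percentage.
def downtime_minutes_per_year(availability_pct):
    minutes_per_year = 365 * 24 * 60  # 525,600
    return (1 - availability_pct / 100) * minutes_per_year

print(round(downtime_minutes_per_year(99.99), 1))   # → 52.6 (minutes/year)
print(round(downtime_minutes_per_year(99.999), 2))  # → 5.26 (minutes/year)
```

Going from four nines to five nines cuts the yearly downtime budget from roughly 53 minutes to roughly 5.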
We decided to focus on Microservices for two reasons.
Our domain expertise is in the backend
From BlazeMeter load and API testing to Apache JMeter and Taurus, we’ve been building developer tools for backend testing for the last decade, and we believe Microservices are a natural fit where our expertise can best add value.
We are intrigued with the problem
We believe that Microservices represent a new paradigm, completely different from the one that included architectures like Monolith and SOA, especially when it relates to Microservice orchestration tools such as Kubernetes.
We feel that existing tools, which are great for testing Monolith and SOA, may not be best suited to the new paradigm. We firmly believe that any relevant solution must be built from the ground up to address the specific challenges presented by Microservices.
We were among the first to start the movement to shift-left-testing. It was a great first step…
“let developers own the quality of their software.” (A quote from my friend and two-time customer, Antonio Almazzo.)
…but we hadn’t considered the workload it would add on developers.
We decided to focus all of our energy and experience on researching and building a solution that not only provides a relevant, up-to-date way to improve software reliability for Microservices, but also offers a holistic approach: one that offloads what used to be the developers’ testing workload and gives them precious time back.
(which is just another beginning)