Ephemeral architectures in which microservices scale up automatically to meet capacity requirements are a game changer. Whereas deploying container-backed microservices and orchestrating the environment they’re to run in used to be a laborious manual task, today’s technologies such as Kubernetes automate much of the process of packaging and deploying applications into the cloud.
But there is a problem: service management. One of the crucial tasks in a microservice environment is that microservices need to find one another and interact safely. Back when the number of microservices in play was small, it was possible to manually configure each microservice with the IP addresses of the services it called and to declare the operational behavior between them.
But that was then and this is now. Today, a single enterprise might have thousands of microservices in operation. Many of these microservices will be created and destroyed as needed; that, after all, is the nature of ephemeral computing. Continually fiddling with configuration settings by hand to maintain reliable communication among an enterprise's microservices no longer scales.
Modern microservice management requirements exceed human capacity, and better solutions are needed. Enter the service registry and the service mesh.
Understanding the Service Registry
A service registry is a database that keeps track of microservice instances and their locations. Typically, when a microservice starts up, it registers itself with the service registry. Once a microservice is registered, the service registry will usually call the microservice's health check API to make sure the microservice is running properly. Upon shutdown, the microservice removes itself from the service registry.
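The register, health-check and deregister lifecycle described above can be sketched in a few lines of code. This is a toy in-memory illustration, not a real registry such as Consul or etcd; all class, service and instance names here are hypothetical:

```python
import time

class ServiceRegistry:
    """Toy in-memory service registry (illustration only; production
    systems use Consul, etcd, ZooKeeper or similar)."""

    def __init__(self):
        # service name -> {instance_id: (host, port, last_seen_timestamp)}
        self._instances = {}

    def register(self, service, instance_id, host, port):
        # A microservice calls this when it starts up.
        self._instances.setdefault(service, {})[instance_id] = (host, port, time.time())

    def heartbeat(self, service, instance_id):
        # Stands in for the registry polling the instance's health check API.
        host, port, _ = self._instances[service][instance_id]
        self._instances[service][instance_id] = (host, port, time.time())

    def deregister(self, service, instance_id):
        # A microservice calls this on graceful shutdown.
        self._instances.get(service, {}).pop(instance_id, None)

    def lookup(self, service):
        # How one microservice finds the live instances of another.
        return [(h, p) for h, p, _ in self._instances.get(service, {}).values()]

# Example lifecycle for a hypothetical "billing" service:
registry = ServiceRegistry()
registry.register("billing", "billing-1", "10.0.0.12", 8080)
registry.register("billing", "billing-2", "10.0.0.13", 8080)
print(registry.lookup("billing"))   # both live instances
registry.deregister("billing", "billing-1")
print(registry.lookup("billing"))   # only billing-2 remains
```

The `lookup` method is the payoff: because instances register and deregister themselves, callers never need a hard-coded IP address.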
There are a number of open source service registry technologies available. The Apache Foundation publishes ZooKeeper. Also, Consul and Etcd are popular solutions.
The service registry pattern is pervasive. These days, most microservices are assigned IP addresses dynamically, so the service registry is the only way that one microservice can get the information it needs to communicate with another service. In fact, the service registry is so important that it serves as the foundation of the next generation of distributed computing technology: the service mesh.
Evolution to the Service Mesh
A service mesh is a technology that provides a layer of infrastructure that facilitates communication between microservices. Most service mesh projects provide service registration internally, as would be expected. But a service mesh adds functionality for declaring operational behavior such as whitelist security, failure retries and routing behavior.
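As a concrete illustration of declaring operational behavior, here is what a retry policy might look like in Istio (one of the meshes named below). This is a sketch; the `reviews` service name is hypothetical, and the exact fields depend on the Istio version in use:

```yaml
# Hypothetical Istio VirtualService: declare retry behavior for calls
# to a "reviews" service without changing any application code.
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: reviews
spec:
  hosts:
    - reviews
  http:
    - route:
        - destination:
            host: reviews
      retries:
        attempts: 3
        perTryTimeout: 2s
        retryOn: 5xx,connect-failure
```

The point is that failure retries become a declarative infrastructure concern rather than logic scattered through each calling service.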
For example, imagine a microservice that calls out to another service, such as time.jsontest.com, which is outside the application domain. Unless the service mesh is configured to allow calls to the external resource at time.jsontest.com, any service using the external resource will fail until access is granted. (Restricting access to external domains by default is a good security practice.)
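In Istio, for instance, granting that access might look like the following ServiceEntry. This is a sketch under the assumption that the mesh is configured to block unregistered external hosts by default:

```yaml
# Hypothetical Istio ServiceEntry: explicitly allow mesh workloads to
# call the external API at time.jsontest.com.
apiVersion: networking.istio.io/v1beta1
kind: ServiceEntry
metadata:
  name: external-time-api
spec:
  hosts:
    - time.jsontest.com
  ports:
    - number: 80
      name: http
      protocol: HTTP
  resolution: DNS
  location: MESH_EXTERNAL
```

Until an entry like this is applied, calls to the external domain fail, which is exactly the deny-by-default behavior the article describes.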
The security capabilities alone are a compelling reason to use a service mesh to coordinate microservice behavior, but there's more. Most service mesh projects can publish a graph of microservice connections and dependencies, along with performance metrics at each hop along the way.
There are many open source service mesh technologies available. One is Istio, which was started by developers from Google, IBM and Lyft. Linkerd is a service mesh project sponsored by the Cloud Native Computing Foundation. Consul, which is mentioned above as a service registry project, has evolved into a full-fledged service mesh product. These are just a few of the many service mesh products that are appearing on the distributed cloud-based computing landscape.
Why a Service Registry and a Service Mesh Are Important to Performance Testing
As working in the cloud has become more prevalent in the world of performance testing, so too has working with service discovery technologies such as the service registry and, particularly, the service mesh. The service registry and service mesh are not low-level pieces of infrastructure; rather, they are first-class players in the enterprise's digital computing infrastructure.
In the old days, when applications were monolithic and everything lived behind a single IP address, all the performance tester needed to be concerned with was behavior at that IP address. Today, it's different. There might be thousands of IP addresses in play that change at a moment's notice, and there might be different operational and security policies in force at each address.
Performance testing is no longer about measuring the behavior of a single request and response at a single entry point. There will be hundreds, maybe thousands of points along the execution path to consider, so at the least, the tester is going to have to collect performance data from throughout the ecosystem. The performance tester will also need to understand the workings of the service mesh well enough to collaborate with DevOps engineers on setting routing configurations for A/B testing and on optimizing the environment throughout the system.
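To make the A/B testing point concrete, a weighted routing rule in Istio might look like the sketch below. The `checkout` service and its `v1`/`v2` subsets are hypothetical, and in practice the subsets must also be defined in a companion DestinationRule:

```yaml
# Hypothetical weighted routing for an A/B test: send 90% of traffic
# to version v1 and 10% to the candidate v2 of a "checkout" service.
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: checkout
spec:
  hosts:
    - checkout
  http:
    - route:
        - destination:
            host: checkout
            subset: v1
          weight: 90
        - destination:
            host: checkout
            subset: v2
          weight: 10
```

A tester who can read and reason about rules like this can tell exactly which fraction of load is exercising each version, and can interpret performance results accordingly.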
Modern performance testing goes well beyond writing procedural scripts and collecting test results. Today’s digital ecosystem is intelligent. As systems become more autonomous and ephemeral, having a strong grasp of technologies such as the service registry and service mesh will go from being a nice-to-have skill to one that is required for testing professionals. The modern performance tester needs to be able to work with intelligent systems to design and implement comprehensive tests that are as dynamic as the systems being tested.
Article by Bob Reselman, nationally known software developer, system architect, industry analyst and technical writer/journalist. Bob has written many books on computer programming and dozens of articles about topics related to software development technologies and techniques, as well as the culture of software development. Bob is a former Principal Consultant for Cap Gemini and Platform Architect for the computer manufacturer Gateway. In addition to his software development and testing activities, Bob is in the process of writing a book about the impact of automation on human employment. He lives in Los Angeles and can be reached on LinkedIn at www.linkedin.com/in/bobreselman.