What is a service mesh?
A service mesh is a tool that extracts the complicated logic governing how services communicate in the cloud to a separate layer of infrastructure. This is especially critical for microservices-based applications, which are made up of a network of services, each performing a specific business function.
To ensure users never experience downtime or slowness, the connections between services running in different containers must be secure, resilient, and observable. The overarching goal is rock-solid stability at speed: the ability to deliver rapidly while retaining the core cloud-native benefits of scalability and agility that businesses need to innovate. And with network concerns handled by a service mesh, developers can focus on business logic.
The service mesh grew out of the rapid expansion of microservices-based applications. Consider a traditional monolithic application, running many services on a single platform. There’s a single place to monitor all services, and they can easily call each other because the address is always local.
But to change a single service, the entire application has to be redeployed. Changing one service can break many others that depend on it, requiring close developer coordination and extensive testing cycles before each release. And the application limits the tools and languages available to developers, reducing their delivery speed. This lack of flexibility becomes a severe weakness when you need to quickly create innovative new products to stay competitive.
In addition, traditional applications are limited in terms of scalability; you can’t increase the capacity of a single service without increasing the capacity of the application as a whole. In order to be ready for performance spikes during seasonal fluctuations, businesses often build a large, expensive infrastructure that’s only used for 5 days a year.
This is what led to the advent of the microservices architecture. In this new way of building applications, small teams build a network of services, each performing a specific business function, like inventory lookup or payment processing. Services run independently, but they communicate with each other to request data and implement application logic; for example, the shopping cart service communicates with the payment processing service to complete a purchase. Being able to control individual parts of an app independently makes it easier to modify quickly. If you need a better search engine, you can simply replace the microservice that does searching, without going through a full release cycle for the entire application. Scalability is a matter of replicating the individual services that need more capacity while leaving the rest alone.
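As a concrete illustration of the shopping cart and payment example above, here is a minimal Python sketch (all names are hypothetical) of two independently replaceable services. In production, each would run in its own container and communicate over the network; here the call is simulated in-process:

```python
# Two microservices, each performing one specific business function.
# The payment service can be replaced or scaled without touching the cart.

class PaymentService:
    """Performs a single business function: payment processing."""

    def charge(self, customer_id: str, amount_cents: int) -> dict:
        # A real implementation would call a payment provider here.
        return {"customer_id": customer_id,
                "amount_cents": amount_cents,
                "status": "approved"}

class CartService:
    """Owns the shopping cart; delegates payment to the payment service."""

    def __init__(self, payment_client: PaymentService):
        # The client is injected, so the payment service can change
        # independently of the cart service.
        self.payment = payment_client
        self.items: dict = {}  # item name -> price in cents

    def add_item(self, name: str, price_cents: int) -> None:
        self.items[name] = price_cents

    def checkout(self, customer_id: str) -> dict:
        total = sum(self.items.values())
        # Cross-service call: in the cloud this would go over the network.
        return self.payment.charge(customer_id, total)

cart = CartService(PaymentService())
cart.add_item("book", 1500)
cart.add_item("pen", 300)
receipt = cart.checkout("customer-42")
```

Because the cart only depends on the payment service's small interface, swapping in a better payment implementation does not require redeploying the cart.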
Cloud-native applications are based on this microservices model, leveraging the on-demand availability of hardware in the cloud from providers like AWS or Azure to quickly spin up new services as needed.
However, with an application split up into dozens or hundreds of pieces, getting control over it is more challenging. The logic governing how those pieces communicate becomes much more complex in the cloud, where security risks abound and networks can be unstable. In order to ensure that users never experience downtime or slowness, the connections between each piece must be carefully designed so the application is resilient. One level of security is rarely enough to guard against hackers and protect private data; and as new, stronger security options become available, businesses want to adopt them as soon as possible. And writing code to do all of these things for each piece is not sustainable as your app continues to grow and gain adoption.
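To make that per-service burden concrete, here is a minimal Python sketch (all names are hypothetical) of the retry-and-timeout boilerplate each service would otherwise carry in its own code; this is exactly the kind of logic a service mesh extracts into its own layer:

```python
# Hand-rolled resilience logic: retries with exponential backoff plus a
# total time budget. Without a mesh, every service repeats code like this.
import time

def call_with_retries(operation, max_attempts=3, base_delay=0.01,
                      timeout=1.0):
    """Run `operation`, retrying transient failures with backoff."""
    start = time.monotonic()
    for attempt in range(1, max_attempts + 1):
        if time.monotonic() - start > timeout:
            raise TimeoutError("call budget exhausted")
        try:
            return operation()
        except ConnectionError:
            if attempt == max_attempts:
                raise
            # Back off before the next try: 10 ms, 20 ms, 40 ms, ...
            time.sleep(base_delay * 2 ** (attempt - 1))

# Simulated flaky downstream service: fails twice, then succeeds.
calls = {"count": 0}

def flaky_inventory_lookup():
    calls["count"] += 1
    if calls["count"] < 3:
        raise ConnectionError("transient network error")
    return {"sku": "A-100", "in_stock": 7}

result = call_with_retries(flaky_inventory_lookup)
```

Multiply this by every call path in every service, and the maintenance cost of keeping it all consistent is what makes the extracted, centrally managed approach attractive.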
Enter the service mesh. A service mesh helps with these basic network-level functions by extracting them to a separate layer of infrastructure.
Once the service mesh was created, app developers rejoiced. With a separate layer of infrastructure in place to take control of the many functions made possible by microservices, new and exciting use cases began to emerge. While the possibilities are endless, consider just a few: risk mitigation, resource cost, and resilience, each described below.
Separating these functional considerations from the goals of the application makes it easier to maintain and operate.
While early service mesh innovators like Netflix may have started with localized, custom solutions, the most common pattern among service mesh providers today is to deploy small proxies – also called “sidecars” because they ride along with the main service like a motorcycle sidecar – that run side by side with each microservice and report back to a central management console. As traffic passes between services, each proxy applies clear, locally enforced instructions for routing, throttling, and security, like the signs and lights on a street network, to ensure requests reach their destinations. This rapid, real-time direction and control optimizes the responsiveness of the application.
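The sidecar pattern just described can be sketched in a few lines of Python. The rule formats and names below are hypothetical simplifications; a real mesh distributes such rules from the central console and enforces them at the network level:

```python
# A toy sidecar: every request passes through it, and it applies locally
# held routing and throttling rules before forwarding to the service.

class SidecarProxy:
    def __init__(self, routes, rate_limit):
        self.routes = routes          # path prefix -> destination handler
        self.rate_limit = rate_limit  # max requests this proxy will accept
        self.handled = 0

    def handle(self, path, payload):
        # Throttling rule, applied locally like a traffic light.
        if self.handled >= self.rate_limit:
            return {"status": 429, "error": "throttled"}
        self.handled += 1
        # Routing rule: look up the destination in the route table.
        for prefix, handler in self.routes.items():
            if path.startswith(prefix):
                return {"status": 200, "body": handler(payload)}
        return {"status": 404, "error": "no route"}

def inventory_handler(payload):
    # The "main service" this sidecar rides along with.
    return {"sku": payload["sku"], "in_stock": 7}

proxy = SidecarProxy(routes={"/inventory": inventory_handler}, rate_limit=3)
first = proxy.handle("/inventory/lookup", {"sku": "A-100"})
second = proxy.handle("/inventory/lookup", {"sku": "B-200"})
unrouted = proxy.handle("/payments/charge", {"sku": "C-300"})
throttled = proxy.handle("/inventory/lookup", {"sku": "D-400"})
```

The key property is that the service itself contains none of this logic: routing and throttling decisions live in the proxy and can be updated centrally without redeploying the service.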
However, the transition to successfully deploying microservices-based applications goes beyond the service mesh; other tools and concepts are an essential part of the new landscape. Containers, built with tools like Docker, bundle software into easily deployable packages, and platforms like Kubernetes and OpenShift manage those containers in clusters alongside the service mesh. And no modern method of application deployment is effective without DevOps, a philosophy that merges development and operations into a single team that owns the entire software application lifecycle.
Risk mitigation: The cloud is a zero-trust environment. From a network security perspective, each public microservice is potentially the weakest link, putting your customers’ data and your reputation at risk. That’s why being able to apply comprehensive security policies from a central location, and update them easily, is critical.
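As an illustration of centrally defined, locally enforced policy, here is a minimal Python sketch (the names and policy format are hypothetical) of a deny-by-default service-to-service authorization table. Updating the single central table changes enforcement at every proxy on the next distribution:

```python
# Central policy: which calling services may reach which targets.
# Each sidecar proxy evaluates this table locally for its own service.
POLICY = {
    "cart": {"allowed_targets": {"payments", "inventory"}},
    "web":  {"allowed_targets": {"cart"}},
}

def authorize(caller: str, target: str) -> bool:
    """Deny by default: unknown callers and unlisted targets are refused."""
    rules = POLICY.get(caller)
    return rules is not None and target in rules["allowed_targets"]

# The cart may charge payments; the public web tier may not.
cart_to_payments = authorize("cart", "payments")
web_to_payments = authorize("web", "payments")
unknown_to_cart = authorize("unknown", "cart")
```

Because the decision logic lives outside the services, tightening policy (for example, revoking an allowed target) is a single central edit rather than a code change in every service.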
Resource cost: Writing cloud-native applications requires expert developer skills; retries, timeouts, logging, and application health checks are all critical for operating in the cloud. But they don’t provide any new features for your users, and by writing them yourself, you run the risk of introducing errors. Hand that work off to the service mesh and you’ll recover many hours of developer time.
Resilience: The massive adoption of always-available mobile apps has raised the expectation that every application deliver 100% uptime, and it’s becoming ever easier for consumers to switch after a poor user experience. Slowness and downtime are no longer acceptable, which is why a resilient, reactive approach to application development is essential. A service mesh can reroute requests and throttle traffic to protect your revenue.
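The rerouting idea can be sketched as follows; all names here are hypothetical, and a real mesh performs this failover transparently at the proxy layer rather than in application code:

```python
# Mesh-style failover: if the primary instance of a service is down, the
# request is retried against a healthy replica so the user never notices.

def route_with_failover(endpoints, request):
    """Try each endpoint in priority order; return the first success."""
    last_error = None
    for endpoint in endpoints:
        try:
            return endpoint(request)
        except ConnectionError as err:
            last_error = err  # mark this replica unhealthy, try the next
    raise last_error  # every replica failed

def unhealthy_replica(request):
    raise ConnectionError("instance down")

def healthy_replica(request):
    return {"status": 200, "echo": request}

response = route_with_failover([unhealthy_replica, healthy_replica],
                               {"path": "/checkout"})
```

Combined with the throttling shown earlier, this is how a mesh keeps an application responsive through partial failures and traffic spikes.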
When it comes to applications, a service mesh doesn’t go far enough. Its focus is the network connectivity between services, not the application itself. As applications gain adoption, they may need to scale rapidly and add new features to meet business needs; in fact, your application may change quite a lot. Any changes that require coding will slow your progress. And without central visibility and governance, it can be difficult to publish and reuse microservices across teams.
It’s clear that as applications evolve, being able to personalize, enhance, and control services without code is how you can scale and develop faster. Instead of a service mesh, you need an application mesh—webMethods AppMesh.
AppMesh consists of a set of lightweight, powerful microgateways controlled by a central API management platform. It gives you visibility into the behavior of both your users and your microservices at the application level, and you can easily reuse and govern those microservices just as you do with APIs. It’s a sophisticated service mesh architecture that lets you modify the behavior of your services in real time and apply advanced user authorization policies with the touch of a button.
And instead of adding yet another tool to your landscape, webMethods AppMesh is embedded in your API management layer, so you can manage not only APIs, but microservices and service meshes from a single place. This is the foundation for developing cloud-native applications that leverage microservices and APIs.