An In-Depth Guide to Microservices Architecture Patterns
Microservices architecture has become the methodology of choice for modern application development. It can solve many problems, but it’s not without challenges of its own. That’s why it’s necessary to understand the common design patterns, so you can tackle any orchestration issues with ease.
Let’s take a closer look at microservices architecture patterns, what they are, and how each one is used.
Understanding microservices patterns
Microservices design patterns are software design patterns that generate reusable autonomous services. The aim is to allow developers who use microservices to accelerate application releases and deploy each microservice independently if needed.
Different microservices patterns each have their own advantages and drawbacks, and each suits a different set of design problems.
Each service is fully autonomous and full-stack. Changing a service implementation won’t impact other services because they communicate using well-defined interfaces.
Microservice design patterns
Let’s now examine some of the popular microservices design patterns and how they may be of use to developers.
Aggregator

In the world of computing, an “aggregator” is a website or program that gathers and displays related pieces of data. In microservices, the aggregator pattern is a service or web page that calls on a variety of other services to obtain the necessary data or achieve the required functions.
When a monolithic architecture is broken into microservices, the source of the output is divided across those services. This pattern is useful when a developer needs a single output that combines data from multiple services.
If there are two services, each with its own database, an aggregator with a unique transaction ID can collect data from each, apply the business logic, and publish the result as a REST endpoint. The collected data can then be handed over to whichever microservices require it.
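As a minimal sketch (with hypothetical service and field names, and stubbed functions standing in for real REST calls), an aggregator combining data from two services might look like this:

```python
# Hypothetical sketch of the aggregator pattern: two stubbed services,
# each owning its own data, combined behind one endpoint.

def order_service(order_id):
    # Stand-in for a call to the order microservice's REST API.
    return {"order_id": order_id, "items": ["keyboard", "mouse"]}

def customer_service(order_id):
    # Stand-in for a call to the customer microservice's REST API.
    return {"order_id": order_id, "customer": "Acme Corp"}

def aggregate_order_view(order_id):
    """Collect data from both services, apply business logic,
    and return a single combined response."""
    order = order_service(order_id)
    customer = customer_service(order_id)
    return {
        "transaction_id": order_id,          # shared correlation ID
        "customer": customer["customer"],
        "item_count": len(order["items"]),   # simple business logic
        "items": order["items"],
    }

print(aggregate_order_view(42))
```

In a real system each stub would be an HTTP call, and the aggregator would expose `aggregate_order_view` as its own REST endpoint.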
API gateway

An API gateway is a variation on the aggregator service. Microservices are often described as “the enablers of APIs” in DevOps, and gateways solve problems such as how to request information from multiple microservices and how to handle several protocol requests at once.
An API gateway can resolve not only these concerns but other issues that come to light. This microservice design pattern can be thought of as a proxy service that routes each request to the relevant microservice.
An API gateway can send requests to multiple services and, in the same way as an aggregator, send the results back to the composite or consumer service. It can also be the entry point for all microservices. This creates fine-tuned microservice-based APIs suited to different types of clients and their needs.
Put simply, API gateways are able to convert protocol requests from one type to another with the assistance of the API gateway design pattern. They can also offload the microservices’ responsibility for authentication and authorization.
Picture this scenario: a client sends a request, which is passed to the API gateway acting as the entry point. The gateway forwards the request to the appropriate microservices, and a load balancer distributes it across service instances.
Microservices are thus able to communicate with each other statelessly, i.e., via HTTP requests or a message bus.
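The routing role described above can be sketched as follows, with hypothetical paths and stubbed backends standing in for real microservices:

```python
# Minimal sketch of an API gateway as a routing proxy. Real gateways
# also handle authentication, rate limiting, and protocol translation.

BACKENDS = {
    "/users": lambda req: {"service": "user-service", "data": ["alice", "bob"]},
    "/orders": lambda req: {"service": "order-service", "data": [101, 102]},
}

def gateway(path, request=None):
    handler = BACKENDS.get(path)
    if handler is None:
        return {"status": 404, "error": "no route for " + path}
    # Forward the request to the matching microservice.
    return {"status": 200, "body": handler(request)}

print(gateway("/users"))
print(gateway("/missing"))
```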
Asynchronous messaging

An asynchronous messaging design pattern allows all services to communicate with each other, but not necessarily sequentially. You can use various microservices software architecture patterns and techniques to achieve this.
For example, with three services, a client’s request may be sent to two of them simultaneously, with the requests placed in a message queue. The request can also be forwarded to the third service, which doesn’t need to send its response back through the service the request came from.
Asynchrony can be achieved, but this is carried out in an application-specific way. Some microservice architectures may choose to use message queues instead of REST requests/responses because of this.
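Using the standard library’s `queue` module in place of a real message broker (such as RabbitMQ or Kafka), the queued, non-sequential exchange described above might be sketched like this (names are illustrative):

```python
# Sketch of asynchronous, queue-based communication between services.
# The producer enqueues events and moves on without waiting for a
# response; the consumer drains the queue at its own pace.
import queue
import threading

events = queue.Queue()

def producer():
    # The calling service publishes events and does not block.
    for order_id in (1, 2, 3):
        events.put({"type": "order_placed", "order_id": order_id})

def consumer(processed):
    # A downstream service processes events as they arrive.
    while True:
        try:
            event = events.get(timeout=0.5)
        except queue.Empty:
            break
        processed.append(event["order_id"])

processed = []
t = threading.Thread(target=consumer, args=(processed,))
t.start()
producer()
t.join()
print(processed)
```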
Backend for frontend
Backend for frontend, also known as BFF, is a variant of the aforementioned API gateway pattern. It provides an additional layer between the client and microservices.
It isn’t a single point of entry. Instead, BFF introduces a separate gateway for each client. With this approach, you can add an API tailored to each client’s specific wants and needs, removing much of the hassle caused by keeping everything in one place.
Note that this pattern may be extended for particularly complex apps. Different gateways can also be created for certain business domains if needed. Developers often prefer this model as it’s flexible and can be used to respond to most microservice situations.
A backend for frontend approach provides extensive business capabilities. Not only this, but it’s ideal for the continuous delivery of microservice architecture on a large scale.
However, this doesn’t mean every microservices-based architecture should use a BFF pattern—as we’ve discovered, not every pattern resolves every situation. The more complexities there are in the design, the more setup is required.
Remember, when creating a variety of apps, that you may want to choose a more complex communication pattern than you currently need, to make scaling up easier in the future.
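As a sketch of the idea, two hypothetical gateways could shape the same underlying product data differently for mobile and web clients:

```python
# Hedged sketch of backend-for-frontend: one gateway per client type,
# each tailoring the same microservice data for its consumer.
# Service and field names are hypothetical.

def product_service(product_id):
    # Shared microservice both BFFs call.
    return {
        "id": product_id,
        "name": "Widget",
        "description": "A very long description ...",
        "price_cents": 1999,
        "warehouse_codes": ["EU-1", "US-3"],
    }

def mobile_bff(product_id):
    # Mobile clients get a slim payload to save bandwidth.
    p = product_service(product_id)
    return {"id": p["id"], "name": p["name"], "price_cents": p["price_cents"]}

def web_bff(product_id):
    # The web app gets richer data for a full product page.
    p = product_service(product_id)
    return {k: p[k] for k in ("id", "name", "description", "price_cents")}

print(mobile_bff(7))
print(web_bff(7))
```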
Branch

As we’ve seen, a microservice might need to pull data from multiple sources, which can impact data consistency. Those sources can include other microservices, so the branch microservice pattern mixes the aggregator and chain design patterns.
What this means is it can accept requests and responses from two or more microservices at the same time. Additionally, branch patterns can be used to invoke different chains of microservices, or a single chain, based on the business’s needs.
Blue-green deployment

As many microservices fall under the umbrella of one application, there is always a risk of latency and downtime. A blue-green deployment strategy can be implemented to reduce the likelihood of this.
How? By running two identical production environments: blue and green. In this example, green is the existing live instance, and blue is the new version of the application. Only one of the environments is live at any time, with the live environment serving all production traffic.
Many, if not all, cloud-native architecture platforms will provide options for implementing a blue-green deployment. This is beneficial as downtime can greatly impact a business, from lessening its productivity to making it less profitable.
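A toy model of the cutover, with hypothetical version handlers, shows how only one environment serves traffic at a time:

```python
# Toy model of blue-green switching: a router flag decides which of two
# identical environments receives production traffic. In practice the
# switch happens at the load balancer or DNS layer, not in code.

environments = {
    "green": lambda: "v1 response",   # existing live version
    "blue": lambda: "v2 response",    # new version being staged
}
live = "green"

def handle_request():
    # All production traffic goes to whichever environment is live.
    return environments[live]()

assert handle_request() == "v1 response"
live = "blue"   # instant cutover; green stays warm for rollback
assert handle_request() == "v2 response"
```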
Chained microservice

A chained microservice design pattern produces a single response to a client’s request. For example, a request is received by service one, which communicates with service two, which in turn communicates with service three. These services typically all use synchronous HTTP request/response messaging.
Circuit breaker

When remote calls fail, or hang without a response until a timeout is reached, the solution is a circuit breaker. If you have many callers and a supplier that doesn’t respond, you might run out of critical resources, which can cause cascading failures across the application’s systems.
This is when a circuit breaker pattern comes into play. The circuit breaker trips when the number of failures reaches a certain level, and any subsequent calls to the breaker result in an error, a different service, or a default message instead of the protected call being made.
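A minimal circuit breaker along these lines might look like the following (the threshold and fallback message are illustrative; production implementations also add a half-open state that retries after a timeout):

```python
# Minimal circuit-breaker sketch: after enough consecutive failures,
# the breaker trips and subsequent calls fail fast with a default
# message instead of making the protected call.

class CircuitBreaker:
    def __init__(self, max_failures=3, fallback="service unavailable"):
        self.max_failures = max_failures
        self.failures = 0
        self.fallback = fallback

    def call(self, func, *args):
        if self.failures >= self.max_failures:
            # Circuit is open: skip the protected call entirely.
            return self.fallback
        try:
            result = func(*args)
            self.failures = 0          # success resets the count
            return result
        except Exception:
            self.failures += 1
            return self.fallback

def flaky_service():
    raise TimeoutError("remote call hung")

breaker = CircuitBreaker(max_failures=2)
print(breaker.call(flaky_service))   # failure 1: fallback returned
print(breaker.call(flaky_service))   # failure 2: circuit opens
print(breaker.call(flaky_service))   # call skipped entirely
```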
Client-side UI composition
When services are developed by breaking down business capabilities, dependencies, and subdomains, the services responsible for the user experience have to pull data from several microservices.
Previously, there used to be just one call from the UI to a backend service to collect all data and refresh or re-submit the UI page. Nowadays, however, this isn’t the case.
With microservices, the client-side UI has to be designed as a skeleton with multiple sections. Each of these areas will make a call to an individual backend microservice to pull up the data on the screen. Frameworks like AngularJS and ReactJS can help developers do this.
These screens are known as single-page applications or SPAs. Each team of developers creates a client-side UI component that handles the region of the page/screen for their service.
A client-side UI composition team is then in charge of implementing the complete page skeletons. They do this by composing multiple, service-specific UI components.
Command query responsibility segregation (CQRS)

The command query responsibility segregation (CQRS) pattern can be useful if you have a large application reading data from an event store. It separates read operations from update operations, which can otherwise be difficult to scale when queries need to process many entities but updates touch only one.
This separation of concerns allows the software development team to adapt models that are more manageable and offer greater flexibility.
Because of the flexible nature of this design pattern, it can be beneficial for systems that evolve over time. A CQRS implementation can help enhance microservice application performance, safety, and scalability.
The CQRS pattern is preferable in cases where the number of data reads is greater than the number of data writes. It’s also effective in scaling the read model separately.
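As a sketch with hypothetical order data, the separation might look like this: commands go through a write model, which projects each change into a separately queried, denormalized read model:

```python
# CQRS sketch: the command model handles writes and projects changes
# into a read model that serves queries with no business logic.
# Entity names and fields are illustrative.

class ReadModel:
    def __init__(self):
        self.summaries = {}

    def project(self, order_id, total):
        # Maintain a denormalized view optimized for reads.
        self.summaries[order_id] = f"Order {order_id}: ${total / 100:.2f}"

    def get_summary(self, order_id):
        # Read path: just a lookup, easy to scale independently.
        return self.summaries[order_id]

class CommandModel:
    def __init__(self, read_model):
        self.orders = {}
        self.read_model = read_model

    def place_order(self, order_id, total):
        # Write path: store the entity ...
        self.orders[order_id] = {"total": total}
        # ... then project the change into the read model.
        self.read_model.project(order_id, total)

reads = ReadModel()
writes = CommandModel(reads)
writes.place_order(1, 2500)
print(reads.get_summary(1))
```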
Database per service
In some use cases, each microservice must have its own database, private to that service. It’s then accessible via the microservice’s API only and can’t be reached directly by any other service.
For instance, for relational databases, you can use private tables per service, schema per service, or database server per service.
Shared database

A shared database may be preferable if the application is a monolith that you’re trying to break into microservices. Although a shared database isn’t always ideal, it’s the only practical solution in some cases.
There are those who consider this an anti-pattern for microservices, but it’s a good start if you’re looking to break the application into smaller, more logical pieces.
Distributed tracing

In a microservice architecture, requests often span multiple services, with each service handling a client request by performing one or more operations across other services.
This makes distributed tracing invaluable when troubleshooting: using a trace ID, developers are able to follow a request end to end.
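Trace-ID propagation can be sketched as follows, with hypothetical services writing to a shared log (in practice each service logs separately and a tracing backend correlates the entries):

```python
# Sketch of trace-ID propagation: the edge service generates a trace ID
# and every downstream call carries it, so log lines from all services
# can be correlated end to end. Service names are hypothetical.
import uuid

logs = []

def log(service, trace_id, message):
    logs.append(f"[{trace_id}] {service}: {message}")

def inventory_service(trace_id):
    log("inventory", trace_id, "stock checked")
    return True

def order_service():
    trace_id = str(uuid.uuid4())      # created at the edge
    log("orders", trace_id, "order received")
    inventory_service(trace_id)       # propagated downstream
    return trace_id

tid = order_service()
print(logs)
```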
Event sourcing

When dealing with requests, microservices need to exchange data, particularly when each service has its own database. For stable, highly scalable systems, they should communicate asynchronously by exchanging “events”.
In a conventional database, the business entity is stored directly with its current “state”. With event sourcing, state-changing events are stored instead of the entities themselves, meaning changes are saved as a series of immutable events.
The state of a business entity can then be deduced by replaying all of its events. Multiple services can replay events from the event store to determine the correct state of their individual data stores (since data is stored as a series of events rather than as direct updates to those stores).
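A minimal event-sourcing sketch, using an illustrative account entity, rebuilds the current state by replaying the stored events:

```python
# Event-sourcing sketch: state changes are stored as immutable events,
# and the current balance is rebuilt by folding over them.

event_store = [
    {"type": "account_opened", "amount": 0},
    {"type": "deposited", "amount": 500},
    {"type": "withdrawn", "amount": 200},
    {"type": "deposited", "amount": 50},
]

def replay(events):
    """Fold the event stream into the entity's current state."""
    balance = 0
    for event in events:
        if event["type"] == "deposited":
            balance += event["amount"]
        elif event["type"] == "withdrawn":
            balance -= event["amount"]
    return balance

print(replay(event_store))  # 350
```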
External configuration

External configuration can be employed to avoid code modification. By externalizing configuration such as endpoint URLs and credentials, the application can load it either at startup or whenever necessary.
Endpoint URLs and other configuration properties often differ between environments like Dev, UAT, and QA. If those values are hard-coded, a re-build and re-deploy of the service is required for each environment.
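A sketch of externalized configuration using environment variables (the variable names and URLs are hypothetical):

```python
# Externalized-configuration sketch: endpoint URLs and credentials come
# from environment variables with local defaults, so the same build can
# run unchanged in Dev, QA, and UAT.
import os

def load_config():
    return {
        "payments_url": os.environ.get(
            "PAYMENTS_URL", "http://localhost:8080/payments"),
        "db_password": os.environ.get("DB_PASSWORD", ""),
    }

# Simulate a QA deployment setting its own endpoint.
os.environ["PAYMENTS_URL"] = "https://qa.example.internal/payments"
config = load_config()
print(config["payments_url"])
```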
Health check

Once a developer has implemented a microservice architecture pattern, there’s a chance a running service might still be unable to deal with the volume of transactions. Each microservice therefore needs a health-check endpoint that can be used to check how well the application is performing.
This API should be used to check the status of the host, the connection to other services and infrastructure, and any other specific logic.
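Such an endpoint’s payload might be assembled like this (the dependency checks are stubbed; a real service would probe its database and broker connections):

```python
# Health-check sketch: the service reports its own status plus the
# status of its dependencies. Check names are illustrative.

def check_database():
    return True   # stand-in for a real connectivity probe

def check_message_broker():
    return True   # stand-in for a real broker ping

def health():
    checks = {"database": check_database(), "broker": check_message_broker()}
    status = "UP" if all(checks.values()) else "DOWN"
    return {"status": status, "checks": checks}

print(health())
```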
Log aggregation

Each microservice generates its own log file. This becomes a problem when an application consists of several services and a single request spans multiple service instances.
There needs to be a centralized logging service that can compile logs from each service instance. Users can then search and analyze these. They can also create alerts that are triggered when certain problem messages appear, prompting them to resolve the issue.
Performance metrics

It’s important to keep an eye on transactions so patterns can be monitored and alerts sent if a problem arises. However, this becomes increasingly difficult as the service portfolio grows in a microservice architecture.
A metrics service needs to gather data about individual operations and consolidate it, aggregating each service’s metrics to provide reporting and alerting.
There are two models for this: push (each service pushes its metrics to the metrics service) and pull (the metrics service pulls metrics from each service).
Saga

Saga is a microservices design pattern for managing transactions that span multiple microservices, such as an eCommerce order, using a sequence of local transactions. Each local transaction publishes an event that triggers the next stage.
One of the biggest problems in microservices is handling transactions that span multiple services. If one local transaction fails, the saga triggers compensating transactions to undo the steps that have already completed.
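A compensating-transaction saga can be sketched as a list of (action, compensation) pairs; the step names below are illustrative:

```python
# Saga sketch: a sequence of local transactions, each paired with a
# compensating action that undoes it if a later step fails.

def run_saga(steps):
    """steps: list of (action, compensation) pairs."""
    done = []
    for action, compensation in steps:
        try:
            action()
        except Exception:
            # Roll back completed steps in reverse order.
            for undo in reversed(done):
                undo()
            return "rolled back"
        done.append(compensation)
    return "committed"

log = []

def reserve_stock():  log.append("reserve stock")
def release_stock():  log.append("release stock")
def charge_card():    log.append("charge card")
def refund_card():    log.append("refund card")
def ship_order():     raise RuntimeError("shipping failed")

result = run_saga([
    (reserve_stock, release_stock),
    (charge_card, refund_card),
    (ship_order, lambda: None),
])
print(result, log)
```

Because shipping fails, the saga refunds the card and releases the stock, leaving the system consistent without a distributed lock.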
Service discovery

With container technology, IP addresses are allocated to service instances dynamically. Each time an address changes, consumer services may break and require manual adjustment.
Otherwise, every service URL has to be remembered by each consumer, creating tight coupling. Instead, a service registry can be created to store the metadata and specification of each producer service.
A service instance should register to the registry when starting and also de-register if shutting down. There are two types of service discovery: client-side and server-side.
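A registry for client-side discovery might be sketched as follows (service names and addresses are hypothetical; real registries also run health checks and load-balance lookups):

```python
# Service-registry sketch: instances register on startup and deregister
# on shutdown; consumers look up addresses by name instead of
# hard-coding IPs.

class ServiceRegistry:
    def __init__(self):
        self.instances = {}

    def register(self, name, address):
        self.instances.setdefault(name, []).append(address)

    def deregister(self, name, address):
        self.instances[name].remove(address)

    def lookup(self, name):
        # A real client would also load-balance across instances.
        addresses = self.instances.get(name, [])
        if not addresses:
            raise LookupError(f"no healthy instance of {name}")
        return addresses[0]

registry = ServiceRegistry()
registry.register("billing", "10.0.0.7:8080")
registry.register("billing", "10.0.0.8:8080")
print(registry.lookup("billing"))
registry.deregister("billing", "10.0.0.7:8080")   # instance shuts down
print(registry.lookup("billing"))
```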
Decomposition patterns

Let’s now take a look at decomposition patterns, including how they can be used to solve problems.
By business capability
This is something a business does that can generate value. The business capability depends on the nature of the enterprise.
For instance, the capabilities of an online tech sales company include sales, marketing, accounting, etc. Each business capability may be thought of as a service (but one that’s business-oriented as opposed to technical).
Breaking an application into small pieces must be done logically. To decompose an application into smaller services, it can be beneficial to use this business capability pattern.
Decomposing an application using business capabilities can be useful in certain situations, but you may encounter problems with so-called “God classes” that aren’t as easy to decompose. These classes may be common among multiple services.
Domain-driven design

DDD (domain-driven design) can help with this. It uses subdomain and bounded-context concepts to resolve the issue, starting by breaking the domain model into subdomains. Each subdomain then has its own model, and the scope of that model is called the “bounded context”. Microservices are developed around these bounded contexts.
You need to be able to understand exactly how the business works to identify relevant subdomains. This means developers should analyze a company before acting, including its organizational structure and areas of expertise.
By transactions

Another decomposition pattern is by transactions. For each business, there are multiple transactions in the system, and one of the most important parts of a distributed transaction is the “transaction coordinator”. This pattern consists of two phases:
- The prepare phase. Any participants in the transaction must prepare to commit and notify the coordinator that they’re ready to go ahead and complete the transaction.
- The commit or rollback phase. A commit or rollback command is issued by the transaction coordinator to every participant.
This pattern can be slow compared to the operation time of a single microservice, because coordinating a transaction between two microservices slows the system down even when they’re on the same network.
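The two phases can be sketched with stubbed in-memory participants (a real coordinator would also persist its decision log to survive crashes):

```python
# Two-phase-commit sketch: the coordinator asks every participant to
# prepare, and commits only if all vote yes; otherwise it rolls back.

class Participant:
    def __init__(self, name, can_commit=True):
        self.name = name
        self.can_commit = can_commit
        self.state = "idle"

    def prepare(self):
        self.state = "prepared" if self.can_commit else "aborted"
        return self.can_commit

    def commit(self):
        self.state = "committed"

    def rollback(self):
        self.state = "rolled_back"

def coordinate(participants):
    # Phase 1: prepare — every participant votes.
    if all(p.prepare() for p in participants):
        # Phase 2: commit everywhere.
        for p in participants:
            p.commit()
        return "committed"
    for p in participants:
        p.rollback()
    return "rolled_back"

ok = [Participant("orders"), Participant("payments")]
print(coordinate(ok))
bad = [Participant("orders"), Participant("payments", can_commit=False)]
print(coordinate(bad))
```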
Sidecar

In the sidecar pattern (also sometimes known as the sidekick pattern), components of an application are deployed in a separate process or container, which provides isolation and encapsulation. This pattern enables applications to be composed of heterogeneous components and technologies.
The sidecar is attached to a “parent application” and provides assisting features for this. It also shares the same lifecycle as the parent. This is because it’s created and retired alongside it.
Strangler

The strangler pattern, also known as the vine pattern, resembles a vine wrapping itself around a tree. When it’s applied to web applications, each URI call can be intercepted and routed, so the services can be broken down into different domains. These domains are hosted separately from the monolith.
The two separate services can stay side by side in the same URI space, with a single domain taken into account at any one time. The new refactored app wraps around or “strangles” the original app until you’re able to shut down the monolith application.
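A sketch of the routing facade, with hypothetical paths, shows migrated calls going to the new service while everything else still reaches the monolith:

```python
# Strangler-pattern sketch: a facade sends migrated URI paths to new
# services while the rest still hit the monolith. Paths and handlers
# are hypothetical.

def monolith(path):
    return f"monolith handled {path}"

def orders_microservice(path):
    return f"orders service handled {path}"

MIGRATED = {"/orders": orders_microservice}

def facade(path):
    # As more domains are extracted, MIGRATED grows and the monolith
    # handles less, until it can finally be shut down.
    handler = MIGRATED.get(path, monolith)
    return handler(path)

print(facade("/orders"))
print(facade("/invoices"))
```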
Bulkhead

The bulkhead pattern is so named because it resembles the sectioned partitions of a ship’s hull. It works by isolating elements of an application into separate sections, so that if one fails, the others can continue to function.
These partitions are created based on consumer load and availability. It helps to isolate failures, allowing you to keep a service functioning for some customers, even while others are experiencing failure. This can be a useful pattern for businesses that receive large volumes of customers at once.
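A bulkhead can be sketched with one bounded thread pool per dependency, so a slow or failing dependency cannot exhaust the threads serving everyone else (pool names and sizes are illustrative):

```python
# Bulkhead sketch: each downstream dependency gets its own capped
# worker pool ("compartment"), isolating its load from the others.
from concurrent.futures import ThreadPoolExecutor

pools = {
    "payments": ThreadPoolExecutor(max_workers=2),
    "reports": ThreadPoolExecutor(max_workers=2),
}

def call_dependency(name, work):
    # Submitting into the dependency's own pool bounds its resource use.
    return pools[name].submit(work)

payment = call_dependency("payments", lambda: "payment ok")
report = call_dependency("reports", lambda: "report ok")
print(payment.result(), report.result())
```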
Getting started with design patterns in microservices
To successfully get started with design patterns in microservices, it comes down to choosing the right option for each scenario. It’s not a one-size-fits-all solution, which is why there are so many patterns to pick from.
It can be beneficial to use software such as OpenLegacy to help you along the way. This solution gives enterprise developers a true end-to-end “API factory” for creating digital services, which means you get the most out of your core systems without having to make drastic changes.
One major upside is that microservices architecture patterns can deliver a significant reduction in maintenance expenses in the long term. This should be enough to pay for the upfront costs of microservices within a few years.
Quality is also improved with OpenLegacy’s innovative approach, as microservices make for a much cleaner testing process (their simpler build makes it easier to review their code). Add to this the 90% average increase in delivered services per year when using them, and you begin to see their appeal.
Microservices architecture patterns FAQs
What patterns are used in microservices?
There are many microservices architecture patterns, which each offer differing functionality. Commonly used patterns include:
- API gateway
- Database or shared data
- Event sourcing
What are the different types of microservices architecture?
There are many ways to implement different microservices architectural styles. Cloud-native architectures, for example, can support legacy systems by splitting a complex system into smaller, manageable sub-systems.
To do this, you can use synchronous or asynchronous technology. Common synchronous technologies include REST APIs and gRPC; asynchronous communication typically relies on messaging.
How many design patterns are there in microservices?
There are five main microservices architecture design patterns, but these can then be split into many more subgroups.
All of them come under two main types of microservices patterns: client-side and server-side.