From Logical to Physical

So far, when we’ve spoken about microservices, we’ve spoken about them in a logical sense, rather than a physical sense. We could talk about how our Invoice microservice communicates with the Order microservice, without actually looking at the physical topology of how these services are deployed. A logical view of an architecture typically abstracts away underlying physical deployment concerns.

Multiple Instances

When we think about the deployment topology of these two microservices, it’s not as simple as one thing talking to another. To start with, it seems quite likely that we’ll have more than one instance of each service. Having multiple instances of a service allows you to handle more load, and can also improve the robustness of your system, as you can more easily tolerate the failure of a single instance. So, we’ve potentially got one or more instances of Invoice talking to one or more instances of Order. Exactly how the communication between these instances is handled will depend on the nature of the communication mechanism, but if we assume that in this situation we’re using some form of HTTP-based API, a load balancer would be enough to handle routing of requests to different instances.
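
To make the routing idea a little more concrete, here is a minimal sketch of client-side round-robin distribution of HTTP requests, assuming hypothetical hostnames and ports for three Order instances; in practice a dedicated load balancer (such as NGINX, HAProxy, or a cloud provider’s offering) would typically do this job for you.

```python
import itertools
import urllib.request

# Hypothetical addresses of three Order microservice instances.
ORDER_INSTANCES = [
    "http://order-1.internal:8080",
    "http://order-2.internal:8080",
    "http://order-3.internal:8080",
]

# Cycle through the instances so each request goes to the next one in turn.
_next_instance = itertools.cycle(ORDER_INSTANCES)

def get_order(order_id: str) -> bytes:
    """Fetch an order from whichever instance is next in the rotation."""
    instance = next(_next_instance)
    with urllib.request.urlopen(f"{instance}/orders/{order_id}") as response:
        return response.read()
```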

The number of instances you’ll want will depend on the nature of your application – you’ll need to assess the required redundancy, expected load levels, and the like to come up with a workable number. You may also need to take into account where these instances will run. If you are running multiple instances of a service for robustness reasons, you’ll likely want to make sure that these instances aren’t all on the same underlying hardware. Taken further, this might require that you have different instances distributed not only across multiple machines, but also across different data centers, to give protection against a whole data center becoming unavailable.

This might seem overly cautious – what are the chances of an entire data center being unavailable? Well, I can’t answer that question for every situation, but at least when dealing with the main cloud providers, this is absolutely something you have to take into account. When it comes to something like a managed virtual machine, neither AWS, Azure, nor Google will give you an SLA for a single machine, nor do they give you an SLA for a single availability zone (which is the closest equivalent to a data center for these providers). In practice, this means that any solution you deploy should be distributed across multiple availability zones.
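
As a rough illustration of the idea, not tied to any particular provider’s API, the following sketch spreads a desired number of instances evenly across a set of availability zones (the zone names are invented for the example).

```python
# Hypothetical zone names; real identifiers depend on your provider and region.
AVAILABILITY_ZONES = ["zone-a", "zone-b", "zone-c"]

def place_instances(instance_count: int, zones: list[str]) -> dict[str, int]:
    """Assign instances to zones round-robin, so the loss of any one zone
    takes out only its share of the instances."""
    placement = {zone: 0 for zone in zones}
    for i in range(instance_count):
        placement[zones[i % len(zones)]] += 1
    return placement

# Four instances of Order spread across three zones.
print(place_instances(4, AVAILABILITY_ZONES))  # {'zone-a': 2, 'zone-b': 1, 'zone-c': 1}
```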

The Database

Taking this further, there is another major component that we’ve ignored up until this point – the database. As I’ve already discussed, we want a microservice to hide its internal state management, so any database used by a microservice for managing its state is considered to be hidden inside the microservice. This leads to the oft-stated mantra of “don’t share databases”, the case for which I hope has already been made sufficiently by now.

But how does this work when we consider the fact that I have multiple microservice instances? Should each microservice instance have its own database? In a word, no. In most cases, if I go to any instance of my Order service, I want to be able to get information about the same order. So, we need some degree of shared state between different instances of the same logical service.

But doesn’t this violate our “don’t share the database” rule? Not really. One of our major concerns with sharing a database across multiple different microservices is that the logic associated with accessing and manipulating that state ends up spread across those microservices. But here, the data is being shared by different instances of the same microservice. The logic for accessing and manipulating state is still held within a single logical microservice.
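
To illustrate the distinction, here is a minimal configuration sketch (the connection strings and usernames are invented for the example): every instance of Order points at the same Order database, Invoice has its own, and neither microservice is handed the other’s connection details.

```python
# Hypothetical configuration, one entry per logical microservice.
# All instances of a given microservice share the same database;
# no microservice is given another microservice's connection details.
DATABASE_CONFIG = {
    "order": {
        "url": "postgresql://order-db.internal:5432/order",
        "user": "order_service",
    },
    "invoice": {
        "url": "postgresql://invoice-db.internal:5432/invoice",
        "user": "invoice_service",
    },
}

def database_for(service_name: str) -> dict:
    """Each instance of a microservice looks up only its own database."""
    return DATABASE_CONFIG[service_name]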

DATABASE DEPLOYMENT AND SCALING

As with our microservices, we’ve mostly talked about a database in a logical sense so far. We’ve ignored any concerns about the redundancy or scaling needs of the underlying database. We’ve also sidestepped an important concept of most databases you’ll find yourself using – that you can manage multiple logically isolated databases on the same database infrastructure.

The exact terms used here vary between different database vendors, but broadly speaking a physical database deployment might be hosted on multiple machines, for a host of reasons. A common example would be to split load for reads and writes between a primary node and one or more nodes designated for read-only purposes (these nodes are typically referred to as read replicas). If we were implementing this idea for our Order service, it would work as follows.

All read-only traffic goes to one of the read replica nodes, and you can further scale read traffic by adding additional read nodes. Due to the way that relational databases work, it’s more difficult to scale writes by adding additional machines (typically sharding models are required, which add additional complexity), so moving read-only traffic to these read replicas can often free up more capacity on the write node to allow for more scaling.
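
Here is a minimal sketch of what this read/write split might look like from the application’s point of view, assuming hypothetical hostnames for the primary and its read replicas; real setups often push this routing into a connection proxy or the database driver itself.

```python
import random

# Hypothetical endpoints: one writable primary, several read-only replicas.
PRIMARY = "postgresql://order-primary.internal:5432/order"
READ_REPLICAS = [
    "postgresql://order-replica-1.internal:5432/order",
    "postgresql://order-replica-2.internal:5432/order",
]

def connection_for(is_write: bool) -> str:
    """Writes must go to the primary; reads can go to any replica."""
    if is_write:
        return PRIMARY
    return random.choice(READ_REPLICAS)

# A read-only query is routed to a replica, an insert to the primary.
print(connection_for(is_write=False))
print(connection_for(is_write=True))
```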

Added to this complex picture is the fact that the same database infrastructure can support multiple logically isolated databases. So the databases for Invoice and Order might both be served from the same underlying database engine and hardware. This can have significant benefits: it allows you to pool hardware to serve multiple microservices, can reduce licensing costs, and can also help reduce the work involved in managing the database itself.

The important thing to realize here is that although these two databases might be run from the same hardware and database engine, they are still logically isolated databases. They cannot interfere with each other (unless you allow this). The one major thing to consider is that if this shared database infrastructure fails, you might impact multiple microservices, which could have a catastrophic effect.
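
As a sketch of what this logical isolation could look like on a single shared PostgreSQL server (the names and statements are illustrative, and the exact syntax varies between database engines), each microservice gets its own database and its own user, so neither can read the other’s data.

```python
# Illustrative DDL for a shared PostgreSQL server hosting two logically
# isolated databases. Exact syntax varies between database engines.
ISOLATION_STATEMENTS = [
    # One database and one user per microservice.
    "CREATE DATABASE order_db;",
    "CREATE DATABASE invoice_db;",
    "CREATE USER order_service WITH PASSWORD 'order-secret';",
    "CREATE USER invoice_service WITH PASSWORD 'invoice-secret';",
    # Each user can connect only to its own database.
    "REVOKE CONNECT ON DATABASE order_db FROM PUBLIC;",
    "REVOKE CONNECT ON DATABASE invoice_db FROM PUBLIC;",
    "GRANT CONNECT ON DATABASE order_db TO order_service;",
    "GRANT CONNECT ON DATABASE invoice_db TO invoice_service;",
]

for statement in ISOLATION_STATEMENTS:
    print(statement)  # In practice these would be run by your database admin tooling.
```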

In my experience, organizations that manage their own infrastructure and run in an “on-prem” fashion tend to be much more likely to have multiple different databases hosted on shared database infrastructure, for the cost reasons I outlined before. Provisioning and managing hardware is painful (and, historically at least, databases are less likely to run on virtualized infrastructure), so you want less of it.

On the other hand, teams that run on public cloud providers are much more likely to provision dedicated database infrastructure on a per-microservice basis. The cost of provisioning and managing this infrastructure is much lower. AWS’s Relational Database Service (RDS), for example, can automatically handle concerns like backups, upgrades, and multi-availability-zone failover, and similar products are available from the other public cloud providers. This makes it much more cost effective to have more isolated infrastructure for each microservice, giving each microservice owner more control rather than having to rely on a shared service.
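
As a rough sketch of what per-microservice provisioning could look like with RDS via the boto3 library (the identifiers, region, and instance sizing here are invented for the example; consult the RDS documentation for the full set of options), each microservice gets its own managed instance with multi-AZ failover enabled.

```python
import boto3

rds = boto3.client("rds", region_name="eu-west-1")

# One dedicated, managed database instance per microservice.
for service in ["order", "invoice"]:
    rds.create_db_instance(
        DBInstanceIdentifier=f"{service}-db",  # hypothetical identifier
        Engine="postgres",
        DBInstanceClass="db.t3.micro",         # sizing is illustrative only
        AllocatedStorage=20,
        MasterUsername=f"{service}_admin",
        MasterUserPassword="change-me",        # use a secrets manager in practice
        MultiAZ=True,                          # automatic multi-AZ failover
    )
```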

Environments

When you deploy your software, it runs in an environment. Each environment will typically serve different purposes, and the exact number of environments you might have will vary greatly based on how you develop software and how your software is deployed to your end user. Some environments will have production data, some won’t. Some environments may have all services in them, others might just have a small number, with any non-present services replaced with fake ones for the purposes of testing.

Typically, we think of our software as moving through a number of pre-production environments, with each one serving some purpose to allow the software to be developed and its readiness for production to be tested – we explored this earlier in “Tradeoffs and Environments”. From a developer laptop, to a continuous integration server, to an integrated test environment and beyond – the exact nature and number of your environments will depend on a host of factors, but is driven primarily by how you choose to develop software. Consider, for example, a pipeline for MusicCorp’s Catalog microservice: the microservice moves through different environments before it finally gets into a production environment, where our users will get to use the new software.
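
As a sketch of how the differences between environments might be captured in code (the environment names and settings are made up for the example), note how earlier environments stub out services that aren’t present, and only production uses production data.

```python
# Hypothetical per-environment configuration for the Catalog microservice.
ENVIRONMENTS = {
    "local": {
        "instances": 1,
        "use_production_data": False,
        "stubbed_services": ["Order", "Invoice"],  # fakes stand in for real services
    },
    "ci": {
        "instances": 1,
        "use_production_data": False,
        "stubbed_services": ["Order", "Invoice"],
    },
    "integration-test": {
        "instances": 2,
        "use_production_data": False,
        "stubbed_services": [],  # all services present, but with test data
    },
    "production": {
        "instances": 4,
        "use_production_data": True,
        "stubbed_services": [],
    },
}

# The pipeline promotes the same build through each environment in turn.
PIPELINE_ORDER = ["local", "ci", "integration-test", "production"]
```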

Principles Of Microservice Deployment

With so many options facing you for how to deploy your microservices, I think it’s important that I establish some core principles in this area. A solid understanding of these principles will stand you in good stead no matter what choices you end up making. We’ll look at each principle in detail shortly, but just to get us started, here are the core ideas we’ll be covering.

Isolated Execution

Run microservice instances in an isolated fashion where they have their own computing resources, and their execution cannot impact other microservice instances running nearby.

Focus On Automation

As the number of microservices increases, automation becomes increasingly important. Focus on choosing technology which allows for a high degree of automation, and adopt automation as a core part of your culture.

Infrastructure As Code

Represent the configuration for your infrastructure to ease automation and promote information sharing. Store this code in source control to allow for environments to be recreated.

Zero-downtime Deployment

Take independent deployability further, and ensure that deploying a new version of a microservice can be done without any downtime to users of your service (be they humans or other microservices).

Desired State Management

Use a platform that maintains your microservice in a defined state, launching new instances if required in the event of an outage or an increase in traffic. Consider GitOps, which combines desired state management with Infrastructure As Code to drive even more of your operational tasks from code.
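
To make desired state management a little more concrete before we get into the detail, here is a toy reconciliation loop of the kind a platform such as Kubernetes runs on your behalf; the instance-tracking and launch functions are placeholders standing in for whatever your platform actually does.

```python
import time

DESIRED_INSTANCES = 3       # the state we declare we want
_instances: list[str] = []  # stand-in for whatever the platform actually tracks

def running_instances() -> int:
    """Placeholder: ask the platform how many healthy instances exist."""
    return len(_instances)

def launch_instance() -> None:
    """Placeholder: ask the platform to start one more instance."""
    _instances.append(f"catalog-{len(_instances) + 1}")

def reconcile() -> None:
    """Compare actual state with desired state and correct any drift."""
    for _ in range(DESIRED_INSTANCES - running_instances()):
        launch_instance()

# The platform runs a loop like this continuously, so an instance lost
# to an outage is replaced without any human intervention.
for _ in range(3):
    reconcile()
    print(running_instances(), "instances running")
    time.sleep(1)
```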

Isolated Execution

You may be tempted, especially early on in your microservices journey, to just put all of your microservice instances on a single machine (which could be a single physical machine, or a single VM).
