Five options for Deploying Microservices
Microservices are among the most scalable ways of developing software, and they can be run in many ways, each with different trade-offs and cost structures. Here are five ways of deploying microservices, ranging from simple to complex:
1. Single machine, multiple processes.
At the most basic level, a microservice application can be run as multiple processes on a single machine. Each service listens to a different port and communicates over a loopback interface.
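As a minimal sketch of what one of these services looks like (written here in Go; the port flag and health endpoint are illustrative), each service is just an ordinary process bound to its own loopback port:

```go
package main

import (
	"flag"
	"fmt"
	"log"
	"net/http"
)

func main() {
	// Each copy of this service gets its own port on the same machine.
	port := flag.Int("port", 8080, "port this service listens on")
	flag.Parse()

	http.HandleFunc("/health", func(w http.ResponseWriter, r *http.Request) {
		fmt.Fprintln(w, "ok")
	})

	// Bind to loopback only: peer services on the same machine reach this
	// one at http://127.0.0.1:<port>, and nothing is exposed externally.
	log.Fatal(http.ListenAndServe(fmt.Sprintf("127.0.0.1:%d", *port), nil))
}
```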
This straightforward approach has several obvious advantages:
Lightweight: because it is just processes running on a server, there is no overhead beyond the processes themselves.
Convenience: it’s a great way to experience microservices without the learning curve that other tools have.
Simple troubleshooting: because everything is in the same place, finding a problem or reverting to a working configuration in case of trouble is very simple.
Fixed billing: we know how much we’ll have to pay each month.
This approach works best for small applications with only a few microservices. Beyond that, it falls short: there is no way to scale, the server is a single point of failure, deployments are fragile, and resource limits cannot be set per service. For this option, continuous integration (CI) follows the familiar pattern: build and test the artifact in the CI pipeline, then deploy it with continuous deployment.
2. Multiple machines, multiple processes.
The obvious next step is to add more servers and distribute the load, resulting in increased scalability and availability. When an application outgrows a server’s capacity, we can scale up (upgrade the server) or scale out (add more servers). In the case of microservices, scaling horizontally onto two or more machines makes more sense, because we benefit from increased availability as well. And once we’ve established a distributed setup, we can always scale up by upgrading the servers too.
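For the load to actually be distributed, something must spread requests across the machines. In practice this is a dedicated load balancer such as NGINX or HAProxy, but a minimal sketch of the idea, assuming two hypothetical backend addresses, fits in a few lines of Go:

```go
package main

import (
	"log"
	"net/http"
	"net/http/httputil"
	"net/url"
	"sync/atomic"
)

func main() {
	// Hypothetical addresses of the machines running the service.
	var backends []*url.URL
	for _, addr := range []string{"http://10.0.0.1:8080", "http://10.0.0.2:8080"} {
		u, err := url.Parse(addr)
		if err != nil {
			log.Fatal(err)
		}
		backends = append(backends, u)
	}

	var next uint64
	proxy := &httputil.ReverseProxy{
		// Pick the next backend in turn (round robin); a real setup
		// would also health-check machines and skip dead ones.
		Rewrite: func(r *httputil.ProxyRequest) {
			i := atomic.AddUint64(&next, 1) % uint64(len(backends))
			r.SetURL(backends[i])
		},
	}
	log.Fatal(http.ListenAndServe(":80", proxy))
}
```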
However, horizontal scaling is not without flaws. Going beyond one machine makes troubleshooting much more difficult, and the typical problems associated with the microservice architecture emerge:
How do we correlate log files distributed among many servers?
How do we collect sensible metrics?
How do we handle upgrades and downtime?
How do we handle spikes and drops in traffic?
These are all problems inherent to distributed computing, and you will run into them as soon as more than one machine is involved. The first question, correlating logs, is usually answered by tagging each request with an ID that travels with it across services, as sketched below.
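Here is a minimal sketch of that idea, assuming the github.com/google/uuid package and a hypothetical X-Request-ID header name: middleware that reuses an incoming ID or mints a new one, writes it to every log line, and echoes it back so the next hop can do the same.

```go
package main

import (
	"log"
	"net/http"

	"github.com/google/uuid"
)

// withRequestID reuses the caller's request ID or mints a new one, logs it,
// and echoes it back, so every server touched by a request logs the same ID.
func withRequestID(next http.Handler) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		id := r.Header.Get("X-Request-ID")
		if id == "" {
			id = uuid.NewString() // first hop in the chain: mint the ID
		}
		w.Header().Set("X-Request-ID", id)
		log.Printf("request_id=%s method=%s path=%s", id, r.Method, r.URL.Path)
		next.ServeHTTP(w, r)
	})
}

func main() {
	mux := http.NewServeMux()
	mux.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		w.Write([]byte("hello"))
	})
	log.Fatal(http.ListenAndServe(":8080", withRequestID(mux)))
}
```

Outbound calls to other services should forward the same header; searching all servers’ logs for one request_id then reassembles the request’s full path through the system.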
This option is ideal if you have a few spare machines and want to increase the availability of your application. You’ll be fine as long as you keep things simple and use services that are more or less uniform (same language, similar frameworks). When you reach a certain level of complexity, you’ll need containers to provide more flexibility.
3. Containers are the first step towards Kubernetes.
Containers are packages that contain everything required for a program to run. A container image is a self-contained unit that can run on any server without the need for any dependencies or tools to be installed first. Containers provide just enough virtualization to isolate software.
We get the following advantages from them:
Isolation: contained processes are isolated from one another and from the host operating system. Because each container ships its own filesystem, dependency conflicts between services are avoided.
Concurrency: multiple instances of the same container image can run concurrently without conflict.
Less overhead: because containers do not need to boot an entire operating system, they are much lighter than VMs.
No-install deployments: setting up a container is as simple as downloading and running the image; there is no need to install anything first.
Resource control: we can set CPU and memory limits on containers so they don’t destabilize the server (see the sketch after this list).
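As a rough sketch of the no-install and resource-control points (the image name is hypothetical; --cpus, --memory, and -p are standard docker run flags), launching a capped instance of a service needs nothing beyond the container runtime itself:

```go
package main

import (
	"log"
	"os/exec"
)

func main() {
	cmd := exec.Command("docker", "run",
		"--rm",
		"--cpus", "0.5", // cap the container at half a CPU core
		"--memory", "256m", // and at 256 MiB of RAM
		"-p", "8080:8080", // expose the service port
		"registry.example.com/orders:1.0.0", // hypothetical image
	)
	cmd.Stdout = log.Writer()
	cmd.Stderr = log.Writer()

	// docker pulls the image if it is not already cached;
	// there is nothing else to install on the host.
	if err := cmd.Run(); err != nil {
		log.Fatal(err)
	}
}
```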
Running these containers on a managed container service (such as AWS Fargate or Google Cloud Run) adds further benefits: autoscaling, easy deployments, and no servers to maintain.
However, before you dive in, you should be aware of the following disadvantages:
Vendor commitment: it is always difficult to transition away from a managed service, because the cloud vendor provides and controls the majority of the infrastructure.
Limited resources: managed services impose unavoidable CPU and memory limits.
Less control: we don’t have as much control as we do with the other options.
This option is appropriate for small to medium-sized microservice applications. If you’re comfortable with your vendor, a managed container service is more convenient because it handles many of the details for you. However, for large-scale deployments, this option will fall short.
4. Orchestrator.
Orchestrators such as Kubernetes or Nomad are complete platforms designed to run thousands of containers simultaneously. The most well-known orchestrator is Kubernetes, a Google-created open-source project maintained by the Cloud Native Computing Foundation.
Orchestrators provide, in addition to container management, extensive network features like routing, security, load balancing, and centralized logs — everything you may need to run a microservice application.
With Kubernetes, we step away from custom deployment scripts. Instead, we codify the desired state with a manifest and let the cluster take care of the rest.
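As a sketch of what codifying the desired state means, here is a hypothetical Deployment built with the official Kubernetes Go API types and printed as a manifest; the service name, image, and replica count are illustrative:

```go
package main

import (
	"fmt"
	"log"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"sigs.k8s.io/yaml"
)

func main() {
	replicas := int32(3)
	labels := map[string]string{"app": "orders"}

	dep := appsv1.Deployment{
		TypeMeta:   metav1.TypeMeta{APIVersion: "apps/v1", Kind: "Deployment"},
		ObjectMeta: metav1.ObjectMeta{Name: "orders"},
		Spec: appsv1.DeploymentSpec{
			Replicas: &replicas, // desired state: three identical pods
			Selector: &metav1.LabelSelector{MatchLabels: labels},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{{
						Name:  "orders",
						Image: "registry.example.com/orders:1.0.0",
					}},
				},
			},
		},
	}

	// Print the manifest; it can be handed to the cluster with
	// `kubectl apply -f -`, and Kubernetes keeps reality matching it.
	out, err := yaml.Marshal(dep)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Print(string(out))
}
```

From then on, the cluster continuously works to keep three replicas running, rescheduling pods if a machine dies, with no custom deployment script involved.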
Kubernetes is supported by every major cloud provider and is the de facto platform for microservice deployment. As such, you might think it is the absolute best way to run microservices, and for many companies it is. But there are also a few things to keep in mind:
Complexity: orchestrators are known for their steep learning curve.
Administrative burden: maintaining a Kubernetes installation requires significant expertise.
Skillset: Kubernetes development requires a specialized skillset, so the transition can be slow and may decrease productivity until the team is familiar with the tools.
Kubernetes is the most popular option for companies that use containers extensively. The most difficult challenge for most businesses when migrating to Kubernetes is finding skilled engineers.
5. Serverless.
Serverless allows us to forget about processes, containers, and servers and run code directly in the cloud. Serverless offerings like AWS Lambda and Google Cloud Functions handle all the infrastructure details required for scalable and highly available services, leaving us free to focus on coding.
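A minimal sketch of the paradigm, assuming the github.com/aws/aws-lambda-go/lambda package and an illustrative event shape: the unit of deployment is a single handler function, with no server, port, or process management in sight.

```go
package main

import (
	"context"
	"fmt"

	"github.com/aws/aws-lambda-go/lambda"
)

// greetEvent is a hypothetical payload; real handlers declare a type
// matching whatever triggers them (HTTP request, queue message, etc.).
type greetEvent struct {
	Name string `json:"name"`
}

func handler(ctx context.Context, e greetEvent) (string, error) {
	return fmt.Sprintf("Hello, %s!", e.Name), nil
}

func main() {
	// The provider invokes the function on demand and scales
	// instances up and down (to zero) for us.
	lambda.Start(handler)
}
```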
It’s a completely different paradigm with distinct advantages and disadvantages. On the bright side, we get:
Simplicity of use: we can deploy functions without compiling or building container images, which is great for testing and prototyping.
Scalability: the cloud provides virtually unlimited scalability.
Pay per use: there is no charge if there is no demand.
The disadvantages are as follows:
Vendor lock-in: just like with managed containers, you’re investing in the provider’s ecosystem.
Cold starts: functions that are used infrequently may take a long time to start, because the cloud provider spins down the resources associated with unused functions.
Restrictions on resources: each function has a memory and time limit.
Runtime limitations: only a few languages and frameworks are supported.
Possibility of unexpected expenses: with usage-based pricing, any traffic spike can result in a surprisingly large bill.
Serverless provides a hands-off scalability solution. It doesn’t give you as much control as Kubernetes, but it’s easier to work with because serverless doesn’t require specialized skills. Serverless is an excellent option for small businesses that are rapidly expanding, as long as they are willing to accept its drawbacks and limitations.
Wrapping up
Many factors influence the best way to run a microservice application. A single server using containers is an excellent starting point for experimenting with or testing prototypes.
If the application is mature and spans multiple services, you’ll need something more robust, like managed containers or serverless, and possibly Kubernetes later on as your application grows.
Nothing prevents you from mixing and matching different options. In fact, most businesses use a combination of bare-metal servers, virtual machines, and Kubernetes. A combination of solutions, such as running the core services on Kubernetes, a few legacy services in a VM, and reserving serverless for a few strategic functions, may be the best way to maximize cloud utilization.