In this blog, we’ll talk about the migration journey and the innovation journey before diving into the different architecture types of applications: starting from the monolith, we’ll then look at the principles of microservices as well as the basics of containers and Kubernetes from an architectural perspective. With that basic understanding in place, we can take a closer look at a few DevOps tooling options that help enterprises boost productivity, in particular using Jenkins X as the main CI/CD tool when working with Kubernetes, although there are a couple of other options as well.
From monolithic to the microservices architecture
Back in the day, monolithic architecture was popular for decades before the cloud became a trend. Applications based on a monolithic architecture tend to be layered into three tiers (or N tiers), which basically look like the following:
- A presentation layer, also called the front-end layer, such as Single Page Applications (SPAs), traditional and mobile web applications, hybrid applications, as well as native mobile applications, etc.
- A business logic layer, also known as middleware. This layer might also expose an API for third parties to consume. It should also be able to integrate with its services or external applications asynchronously, an approach that improves resiliency in the case of partial failures.
- A data access layer, also known as the back-end layer. Basically, this is the layer responsible for bringing data to the other layers. Data comes in different forms and shapes, and the choice of data source depends on your data model.
A representative architecture looks like the following:
Caption : Monolith architecture
In the cloud age, microservices architecture is becoming more and more popular compared to monolithic architecture when speaking of scalability, availability and more. The process of transforming a monolithic application into microservices therefore turns out to be a prelude to application modernization. The reason I say that is that the cloud journey is actually an iterative process: the choice between different approaches, from lift and shift to optimization and modernization, depends on requirements, budget and more complex factors, which we’ll talk about briefly in the following section.
Cloud migration journey versus innovation journey
If we look at the cloud migration journey (as in the following diagram), this is where it crosses paths with the innovation journey. As I said before, migrating a monolithic application can be done in a straightforward way by lifting and shifting it into a VM or into containers, for example by using Docker Compose or by deploying a multi-container Pod in Kubernetes. There are barely any technical problems with that, and it will get your application to the cloud.
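As a minimal sketch of the lift-and-shift option mentioned above, a monolith and its database could be containerized together with a Docker Compose file like the following (the service names, image names and ports here are hypothetical placeholders, not from any real project):

```yaml
# docker-compose.yml — lift-and-shift sketch: a monolith plus its database
# (all names and versions below are hypothetical placeholders)
version: "3.7"
services:
  monolith:
    image: mycompany/legacy-shop:1.0   # the unchanged monolithic application
    ports:
      - "80:8080"                      # expose the app on the host
    depends_on:
      - db
    environment:
      - DB_HOST=db                     # point the app at the co-located database
  db:
    image: postgres:11
    volumes:
      - db-data:/var/lib/postgresql/data
volumes:
  db-data:
```

A single `docker-compose up -d` would then bring the whole stack up on any Docker host, without changing the application’s code.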
Caption : Cloud Migration Journey
However, speaking of agility, scalability and availability, lift and shift alone is definitely not a win. To make a difference here, we have to transform the monolithic application into microservices; however, instead of refactoring it in one big chunk or rebuilding it from scratch, we have the possibility to do this gradually and incrementally. This is also where containers show their best advantage.
Before talking about why a container-based solution is a great approach for implementing a microservices architecture compared to serverless frameworks, let’s go back a little bit and look at the principles of microservices.
Principles of microservices
If we look at microservices, we can see that the main idea is to break down your business logic and data access layers into independent modules, as in the following diagram, which you can compare with the previous diagram describing the monolith architecture. This is also known as the ‘one process, one microservice’ principle. Each microservice is loosely coupled and combines with other microservices to construct a complete business application.
Caption : microservices architecture
As you can see, each service has its own data store, and each of them targets a specific business requirement. There are three main reasons to apply the principles of microservices architecture, as below:
- Using independent modules helps each module scale at its own pace and with ease.
- Enabling scalability also reinforces availability: as each module can be deployed independently, downtime is reduced at some point. In case one microservice goes offline, we can easily restore that module, while the other functionalities are not affected, as they are separate, independent microservices.
- It also enables the use of different technologies within the same application, across its different modules. I personally like the example from one of my past projects: I programmed in both Node.js and C# to develop RESTful APIs, and they were exposed behind a common API gateway so that front-end applications within the same system, or external applications, could consume those APIs. This enables us to use different technical resources within the same organization to build a global API integration platform and boost business performance in the API economy.
As we can deploy microservices independently, it also means that we can maintain them independently. At some point, this also increases the difficulty of monitoring, managing networking, security and more, which is where service mesh comes in. We’ll get there in upcoming chapters.
Pathway to build microservices
To build a microservices architecture for your application, as I said, we can refactor and rebuild it gradually. After breaking the modules apart, we’ll need separate version control, using a tool such as Git or Subversion, for each microservice; this is also known as ‘one codebase = one repository’.
In parallel, the code should be separated and isolated from the configuration files and the dependencies to facilitate maintenance and administration tasks. However, it can still contain a certain number of variables which can be overridden at runtime by reading configuration files or computed in the code.
Besides, each microservice should be strictly stateless and autonomous in a ‘self-contained’ way: each microservice has its own persistent data store, and all the resources in the microservice share the same lifecycle.
As I said, in general the front end does not store any data; clearly, the data always lives in the back end, so a middleware component such as an API gateway or a RESTful service is always implemented to help exchange information.
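To make the idea of separating code from configuration concrete, here is a hedged sketch of how the same stateless service image could run in different environments, with only the externalized configuration changing (the image name, variable names and values are hypothetical):

```shell
# Hypothetical sketch: the same image runs unchanged in every environment;
# only the externalized configuration injected at runtime differs.

# Development: point the service at a local database, verbose logging
docker run -d \
  -e DB_HOST=localhost \
  -e LOG_LEVEL=debug \
  mycompany/catalog:1.2        # placeholder image name

# Production: same image, different configuration values
docker run -d \
  -e DB_HOST=catalog-db.internal \
  -e LOG_LEVEL=warn \
  mycompany/catalog:1.2
```

Because the service reads these values at runtime instead of hard-coding them, no rebuild is needed when the environment changes, which is exactly what keeps maintenance and administration tasks simple.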
There’s an official guide to help users design, build, and operate microservices on Azure in Microsoft’s documentation, which you can refer to at the following link: https://docs.microsoft.com/en-us/azure/architecture/microservices/index
There’s also a great example, my favorite so far, of how to implement a microservices reference architecture based on a real-life scenario: eShopOnContainers, a microservices-based application built with .NET Core and Docker. It contains different types of microservices:
Caption: Different types of microservices in eShop on containers example
This example was published on GitHub at the following link: https://github.com/dotnet-architecture/eShopOnContainers.
If you’re a .NET engineer or a fan of Microsoft solutions, you will definitely benefit from the excellent book .NET Microservices: Architecture for Containerized .NET Applications, which explains in detail how to develop in the microservices architecture style. It is highly recommended by myself and some Microsoft tech gurus; here is the link to download it: https://aka.ms/microservicesebook
Pathway to containers
As I mentioned before, containers can be an excellent option for building microservices, apart from serverless offerings such as Azure Functions. One of the reasons I say that is that containers are much faster and more lightweight than the same application deployed in a VM, and they provide a much more structured way to integrate with your CI/CD pipeline; in general, you don’t need to change much in terms of code and configuration along the way.
Then a question comes up: what are containers? Generally speaking, containerization is an approach to managing an application or service by combining it with its dependencies and its configuration (often abstracted as manifest files) and packaging them all together as a container image. The containerized application can be tested as a unit regardless of its host operating system. There are many container runtimes in the tech world, such as Docker, CRI-O, containerd, etc. Docker turns out to be the most popular and common enterprise container platform. You can learn more about Docker from its official documentation: https://docs.docker.com/
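As a minimal sketch of this packaging idea, a container image is typically described by a Dockerfile, which is exactly one of those manifest files mentioned above (the base image, file names and port here are placeholders for a hypothetical Node.js service):

```dockerfile
# Dockerfile — hypothetical sketch of packaging a Node.js service as an image
FROM node:12-alpine            # the base image provides the runtime
WORKDIR /app
COPY package*.json ./
RUN npm install --production   # bake the dependencies into the image
COPY . .
EXPOSE 8080                    # the port the service listens on
CMD ["node", "server.js"]      # the process the container will run
```

Everything the service needs, runtime, dependencies and code, is captured in the resulting image, which is what makes it testable as a single unit on any host.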
Kubernetes can use different container runtimes; Docker is, no doubt, the first choice of runtime on Azure Kubernetes Service. We’ll take a closer look at Kubernetes in the coming section.
Big picture of Container Registries
As I mentioned, a containerization solution such as Docker helps encapsulate and package an application and its dependencies into a container image. You can see an image as a static representation of the application or service itself, together with its configuration and dependencies. Once you have your container image, you can run your application on any host operating system, as long as it is running Docker or another container runtime: a container image is instantiated to create a container running on a Docker host, and it works equally well whether you run the same image on-premises or in the cloud.
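The build-once, run-anywhere cycle described above boils down to a couple of commands; the image name and port below are hypothetical placeholders:

```shell
# Hypothetical walkthrough: build an image once, then instantiate it anywhere
docker build -t myshop/catalog:1.0 .           # package app + dependencies into an image
docker run -d -p 8080:8080 myshop/catalog:1.0  # instantiate the image as a container
docker ps                                      # the running container shows up here
```

The same `docker run` works unchanged on a laptop, an on-premises server or a cloud VM, which is the point of the static image.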
A container registry, technically, is where you store your container images. Looking around the market, you can find a couple of public container registry options, such as Docker Hub, which Docker maintains as its public registry and where other vendors, such as Nginx, publish their official images. Microsoft Azure provides Azure Container Registry as an official cloud-based container registry. As an alternative option, for some legacy applications, an organization can run a private registry on-premises for all the Docker images of its applications.
Caption : Container Registry
Storing your images in a container registry helps you version your container images and keep track of their dependencies as well; all those factors help provide a consistent deployment unit when integrated with a DevOps process.
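As a hedged illustration of this versioned-image workflow, pushing an image to Azure Container Registry takes roughly the following steps (the registry and image names are hypothetical):

```shell
# Hypothetical sketch: version an image and store it in Azure Container Registry
az acr login --name myregistry                                     # authenticate to the registry
docker tag myshop/catalog:1.0 myregistry.azurecr.io/catalog:1.0    # give the image a registry-qualified, versioned name
docker push myregistry.azurecr.io/catalog:1.0                      # store it in the registry
```

A CI/CD pipeline can then pull `myregistry.azurecr.io/catalog:1.0` as an immutable, consistent deployment unit in every environment.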
Kubernetes architecture
Imagine that you can easily manage one or even a couple of containers, whether on-premises or in the cloud, for dev/testing purposes. However, in production, especially for mission-critical business solutions such as e-commerce sites or financial transaction systems, you might have to manage hundreds or even thousands of containers, and networking, deployments, configuration and everything else become challenging. This is where Kubernetes comes in. Kubernetes is a portable, highly extensible, open-source orchestrator which facilitates managing containerized workloads and services. You can see Kubernetes as a big orchestration engine that drives your containers toward their ‘desired state’, and as your best friend for managing all these containers across different worker nodes.
To know what Kubernetes can and cannot do, please refer to the official documentation: https://kubernetes.io/docs/concepts/overview/what-is-kubernetes/
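The ‘desired state’ idea can be sketched with a few commands, assuming a working cluster and a hypothetical image name: you declare how many replicas you want, and Kubernetes keeps reconciling reality toward that declaration.

```shell
# Hypothetical sketch of desired state: declare three replicas and let
# Kubernetes reconcile toward that declaration.
kubectl create deployment catalog --image=myshop/catalog:1.0   # placeholder image
kubectl scale deployment catalog --replicas=3                  # the desired state
kubectl get pods                                               # three pods; if one dies, a replacement is created
```

If a node fails or a container crashes, you don’t restart anything by hand; the orchestrator notices the gap between the desired and the actual state and closes it.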
General Kubernetes architecture
Kubernetes follows a master/worker architecture and is made up of several core components, as shown in the following diagram:
Caption : Kubernetes architecture
Basically, the Kubernetes master contains the following components:
- kube-apiserver, which you can see as the communication manager between the different tools and the Kubernetes cluster.
- etcd, a distributed, reliable key-value store that is simple, secure and fast. The etcd data store holds information about the cluster, such as the nodes, pods, configs, secrets, accounts, roles, bindings and others (the state of the cluster and information about the cluster itself).
- kube-scheduler, which is responsible for scheduling pods onto nodes. You can see it as a postal officer: it assigns each pod to a node, and when the pod specification arrives at the respective node, the kubelet agent on that node actually creates the pod.
- kube-controller-manager, which is responsible for running the Kubernetes controllers, for example, the node controller that responds to changes in a node’s status.
Recall that every node in a Kubernetes cluster has the following components:
- kubelet, a node agent that accepts pod specifications sent from the API server or provided locally (static pods) and actually provisions those pods on the respective node.
- Container runtime, the runtime that actually runs the containers within the pods, such as Docker, CRI-O, containerd, etc.
- kube-proxy, which implements network rules and traffic forwarding to realize part of the Kubernetes Service concept.
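To put these components in context, here is a hedged sketch of the kind of declarative manifest they all cooperate on: the API server accepts it, the scheduler places the pods onto nodes, each kubelet provisions them, and kube-proxy routes traffic to the Service (the names, image and ports are hypothetical):

```yaml
# Hypothetical manifest: two replicas of a service, plus a Service to reach them
apiVersion: apps/v1
kind: Deployment
metadata:
  name: catalog
spec:
  replicas: 2                        # desired state enforced by the controller manager
  selector:
    matchLabels:
      app: catalog
  template:
    metadata:
      labels:
        app: catalog
    spec:
      containers:
        - name: catalog
          image: myshop/catalog:1.0  # placeholder image
          ports:
            - containerPort: 8080
---
apiVersion: v1
kind: Service                        # kube-proxy implements the routing for this
metadata:
  name: catalog
spec:
  selector:
    app: catalog
  ports:
    - port: 80
      targetPort: 8080
```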
Comparing Kubernetes architecture in the public cloud
Kubernetes can run on various platforms: from your laptop, to VMs on a cloud provider, to a rack of bare metal servers. The effort required to set up a cluster varies from running a single command to crafting your own customized cluster.
Running Kubernetes on Microsoft Azure
There are a couple of options to help users run Kubernetes on Azure; the easiest way, I’d suggest, is to start with Azure Kubernetes Service (AKS), which is a managed Kubernetes service.
Getting started with Azure Kubernetes Service (AKS) is a totally fresh experience: it can help you deploy a Kubernetes cluster within a few minutes, and the service itself is free; you pay only for the nodes that you use. It brings a more manageable way of working with Kubernetes. The general architecture of Azure Kubernetes Service looks like the following:
Caption: Azure Kubernetes Service architecture
As you can see, in Azure Kubernetes Service the master is managed by Microsoft Azure as part of the control plane. An AKS cluster can be created via the Azure portal, the Azure CLI or Azure PowerShell; when speaking of provisioning AKS throughout the DevOps process, it makes a lot of sense to use IaC (Infrastructure as Code) template-driven deployment options such as Azure Resource Manager (ARM) templates or Terraform. After an AKS cluster is deployed, the Kubernetes cluster is configured for you by Azure. Users can also take advantage of additional features such as CNI, Virtual Kubelet, KEDA and monitoring, which can be configured as extensions throughout the deployment process. Windows Server container support is currently in preview in AKS. All these features will help you manage Kubernetes with ease.
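As a hedged sketch of the CLI route, creating an AKS cluster takes roughly the following commands (the resource group, cluster name, region and node count are placeholders you would pick for your own environment):

```shell
# Hypothetical sketch: create an AKS cluster with the Azure CLI
az group create --name myResourceGroup --location westeurope
az aks create \
  --resource-group myResourceGroup \
  --name myAKSCluster \
  --node-count 2 \
  --generate-ssh-keys
az aks get-credentials --resource-group myResourceGroup --name myAKSCluster  # configure kubectl
kubectl get nodes   # verify the worker nodes are up; the master stays managed by Azure
```

Note that the same cluster definition can instead be expressed as an ARM template or Terraform configuration when you want it versioned alongside the rest of your IaC.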
Caption: manage Kubernetes with ease
Then I have to mention AKS-Engine, which is actually the core of the Azure Kubernetes Service. It is totally open source on GitHub, and the public tech community can contribute to it; you can find it here: https://github.com/Azure/aks-engine
AKS-Engine can help users customize additional deployment features beyond what Azure Kubernetes Service officially supports. Some excellent contributions to these new features may be covered in the future AKS roadmap.
The following are some useful links which I believe will be very helpful for you:
- Find more about AKS Roadmap at http://aka.ms/aks/roadmap
- Find more about AKS Release Notes at https://aka.ms/aks/releasenotes
- Find more about AKS Preview Features at https://aka.ms/aks/previewfeatures
Summary
The main idea of microservices is to break down your business logic and data access layers into independent modules, which is also known as the ‘one process, one microservice’ principle. Each microservice should be strictly stateless and autonomous in a ‘self-contained’ way: each microservice has its own persistent data store, and all the resources in the microservice share the same lifecycle. Containerization is an approach to managing an application or service by combining it with its dependencies and configuration (often abstracted as manifest files) and packaging them all together as a container image. In the next blog, we’re going to talk more about serverless Kubernetes, so stay tuned!