I work as a consultant and software architect on projects that focus on microservices, DevOps, and Kubernetes. Many of my consulting jobs consist of explaining microservices, why they are great, and how to use them efficiently. Microservices is a buzzword that has been around for a couple of years and almost every developer knows it. However, not many know how to implement them.
Therefore, I decided to start this series with the intent of explaining what microservices are and showing how they communicate. After a couple of posts, I received comments from my readers asking about more advanced topics such as deployments, monitoring, and asynchronous communication. This motivated me to continue the series, and it grew to a total of 65 posts over almost two years. All posts use the same demo application. I tried my best not to change too much, but the code in earlier posts might differ somewhat from its current state.
Since these blog posts cover such a variety of topics, I have categorized them so that they are easy to find. I added each post to the relevant category, and some posts appear in several categories. Additionally, the full list in chronological order can be found at the end of this page.
Getting Started with Microservices
The following posts explain the theory behind microservices and how to set up your first two .NET 6 (originally .NET Core 3.1) microservices. Both microservices use the mediator pattern and communicate via RabbitMQ.
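To give a feel for the mediator pattern mentioned above, here is a minimal sketch of a request and its handler in the style of the MediatR library. The type names and the handler body are purely illustrative and are not taken from the demo application.

```csharp
using MediatR;

// Illustrative request/handler pair; CreateCustomerCommand and
// CreateCustomerHandler are hypothetical names, not from the demo app.
public record CreateCustomerCommand(string Name) : IRequest<Guid>;

public class CreateCustomerHandler : IRequestHandler<CreateCustomerCommand, Guid>
{
    public Task<Guid> Handle(CreateCustomerCommand request, CancellationToken cancellationToken)
    {
        // In the real application this would persist the customer and
        // publish an event to RabbitMQ so the second microservice can react.
        return Task.FromResult(Guid.NewGuid());
    }
}
```

The controller only sends the command through the mediator, so it stays decoupled from the handler that does the actual work.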
Continuous Integration and Unit Tests in Azure DevOps
This section is all about automated builds using YAML pipelines in Azure DevOps. Starting with a simple .NET Core pipeline, it moves on to building Docker images and then to running xUnit tests inside the Docker container. Other topics in this section are the automated versioning of Docker images and, lastly, splitting the pipeline into smaller chunks using templates.
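The build-and-test flow described above could be sketched roughly as the following pipeline; the repository name, Dockerfile targets, and tags are placeholders, not the exact configuration from the posts.

```yaml
# Hypothetical azure-pipelines.yml sketch: build a Docker image and
# run the xUnit tests inside the container.
trigger:
  - main

pool:
  vmImage: ubuntu-latest

steps:
  - task: Docker@2
    displayName: Build Docker image
    inputs:
      command: build
      repository: customerapi
      tags: $(Build.BuildNumber)
  - script: |
      docker build --target test -t customerapi-test .
      docker run customerapi-test
    displayName: Run xUnit tests inside the container
```

Running the tests inside the container means the CI agent only needs Docker installed, not a specific .NET SDK version.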
Continuous Deployment with Azure DevOps
After building the Docker images, let’s focus on deploying these images to Kubernetes. Each pull request is deployed into its own namespace, an SSL certificate and a unique URL are generated, and Helm is used to manage the deployment. Using Helm also allows overriding configuration values. Furthermore, this section explains how to deploy Azure Functions and SQL databases, and how to push NuGet packages to an internal or public feed.
Deploy to Azure Kubernetes Service using Azure DevOps YAML Pipelines
Replace Helm Chart Variables in your CI/CD Pipeline with Tokenizer
Deploy Microservices to multiple Environments using Azure DevOps
Deploy every Pull Request into a dedicated Namespace in Kubernetes
Automatically set Azure Service Bus Queue Connection Strings during the Deployment
Deploy a Docker Container to Azure Functions using an Azure DevOps YAML Pipeline
Publish NuGet Packages to Nuget.org using Azure DevOps Pipelines
Automatically Deploy your Database with Dacpac Packages using Linux and Azure DevOps
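A per-pull-request deployment with overridden Helm values could look roughly like the following pipeline step; the chart path, release name, and value keys are illustrative, not the exact configuration used in the posts.

```yaml
# Sketch: deploy the chart into a namespace derived from the PR id and
# override image tag and host name; all names are placeholders.
- task: HelmDeploy@0
  displayName: Deploy to the pull request namespace
  inputs:
    command: upgrade
    chartPath: charts/customerapi
    releaseName: customerapi-pr-$(System.PullRequest.PullRequestId)
    namespace: pr-$(System.PullRequest.PullRequestId)
    install: true
    overrideValues: image.tag=$(Build.BuildNumber),ingress.host=pr-$(System.PullRequest.PullRequestId).example.com
```

Because each pull request gets its own namespace and host name, reviewers can test every change in isolation before it is merged.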
Kubernetes with Helm
The following posts explain Microsoft’s Azure Kubernetes Service and why Helm is useful for your deployments. After the basics, more advanced topics such as Ingress controllers, automated SSL certificate installation, and KEDA are discussed.
Auto-scale in Kubernetes using the Horizontal Pod Autoscaler
Debug Microservices running inside a Kubernetes Cluster with Bridge to Kubernetes
Configure custom URLs to access Microservices running in Kubernetes
Automatically issue SSL Certificates and use SSL Termination in Kubernetes
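The Horizontal Pod Autoscaler from the first post above can be sketched as a small manifest; the deployment name and thresholds below are placeholders.

```yaml
# Minimal HorizontalPodAutoscaler sketch: scale a deployment between
# 1 and 5 replicas based on average CPU utilization.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: customerapi
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: customerapi
  minReplicas: 1
  maxReplicas: 5
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70
```

The HPA only scales on resource metrics like CPU and memory out of the box; scaling on external dependencies is where KEDA, covered below, comes in.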
SSL Configuration in Kubernetes
Kubernetes can automatically create Let’s Encrypt SSL certificates, and Nginx as an Ingress controller allows the creation of a unique URL for each microservice.
Configure custom URLs to access Microservices running in Kubernetes
Automatically issue SSL Certificates and use SSL Termination in Kubernetes
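Putting both posts together, an Ingress that gets a Let’s Encrypt certificate issued automatically could look like this sketch; the host name, issuer name, and service name are placeholders.

```yaml
# Sketch: Nginx Ingress with an automatically issued certificate via a
# cert-manager cluster issuer; all names are illustrative.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: customerapi
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt-prod
spec:
  ingressClassName: nginx
  tls:
    - hosts:
        - customer.example.com
      secretName: customerapi-tls
  rules:
    - host: customer.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: customerapi
                port:
                  number: 80
```

The annotation tells cert-manager to request a certificate for the listed host and store it in the referenced secret, which the Ingress then uses for SSL termination.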
Create NuGet Packages
NuGet packages allow sharing code between microservices. Additionally, versioning these packages gives developers full control over which version they use and when they want to upgrade to a newer one.
Restore NuGet Packages from a Private Feed when building Docker Containers
Publish NuGet Packages to Nuget.org using Azure DevOps Pipelines
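Packing and pushing a package from a pipeline could be sketched as the following two steps; the service connection name is a placeholder you would configure in Azure DevOps.

```yaml
# Sketch: pack every project and push the resulting packages to
# nuget.org via an external service connection (placeholder name).
- task: DotNetCoreCLI@2
  displayName: Pack NuGet packages
  inputs:
    command: pack
    packagesToPack: '**/*.csproj'
- task: NuGetCommand@2
  displayName: Push to nuget.org
  inputs:
    command: push
    nuGetFeedType: external
    publishFeedCredentials: NugetOrgServiceConnection
    packagesToPush: $(Build.ArtifactStagingDirectory)/**/*.nupkg
```

For a private feed, the push step would target an internal Azure Artifacts feed instead of an external service connection.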
Database Deployments with Azure DevOps
Deploying database changes has always been a pain. Dacpac packages allow developers or database administrators to easily deploy their changes to an existing database, or to create a new one with a pre-defined schema and, optionally, test data. Since Azure DevOps doesn’t support deploying dacpacs on Linux, a custom Docker container is used to deploy the dacpac package.
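Inside such a Linux container, the actual deployment boils down to a single sqlpackage invocation along these lines; the server, database, and credential variables are placeholders.

```sh
# Hypothetical sqlpackage call executed inside the Linux container;
# server name, database name, and credentials are placeholders.
sqlpackage /Action:Publish \
  /SourceFile:Database.dacpac \
  /TargetServerName:myserver.database.windows.net \
  /TargetDatabaseName:CustomerDb \
  /TargetUser:$DB_USER \
  /TargetPassword:$DB_PASSWORD
```

The Publish action compares the dacpac’s schema with the target database and applies only the differences, which is what makes incremental deployments painless.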
Azure Container Registry and Azure Service Bus
The Azure Container Registry is a private registry in Azure. Since it is private, Kubernetes needs an image pull secret to be able to download images from it. Additionally, this section shows how to replace RabbitMQ with Azure Service Bus queues and how to replace the .NET background process with Azure Functions to process the messages in these queues.
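Creating the image pull secret for a private registry is a one-liner; the registry name and credential variables below are placeholders.

```sh
# Sketch: create an image pull secret for a private Azure Container
# Registry; registry URL and service principal credentials are placeholders.
kubectl create secret docker-registry acr-secret \
  --docker-server=myregistry.azurecr.io \
  --docker-username=$ACR_CLIENT_ID \
  --docker-password=$ACR_CLIENT_SECRET
```

The secret is then referenced via `imagePullSecrets` in the pod spec (or the Helm chart’s values) so Kubernetes can authenticate against the registry.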
Azure Functions
Azure Functions can be used to process messages from queues and can be deployed as a Docker container or as a .NET 6 application.
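A Dockerfile for a containerized .NET 6 function could be sketched as follows; the project name is a placeholder, and the base image is the one Microsoft publishes for .NET Azure Functions.

```dockerfile
# Sketch: build a queue-triggered .NET 6 function and package it on
# top of the Azure Functions runtime image; project name is a placeholder.
FROM mcr.microsoft.com/dotnet/sdk:6.0 AS build
WORKDIR /src
COPY . .
RUN dotnet publish QueueFunction.csproj -c Release -o /app/publish

FROM mcr.microsoft.com/azure-functions/dotnet:4
COPY --from=build /app/publish /home/site/wwwroot
```

Shipping the function as a container means it can also run inside the Kubernetes cluster alongside the other microservices instead of only on the Azure Functions service.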
Infrastructure as Code, Monitoring, and Logging
Infrastructure as Code (IaC) allows developers to define the infrastructure and all its dependencies as code. These configurations are often stored in YAML files, which can be checked into version control and deployed quickly using Azure DevOps. Another aspect of operating a Kubernetes infrastructure is logging and monitoring with tools such as Loki or Prometheus.
Use Infrastructure as Code to deploy your Infrastructure with Azure DevOps
Collect and Query your Kubernetes Cluster Logs with Grafana Loki
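As a taste of what an IaC deployment step might look like, the following script creates a resource group and an AKS cluster; all names, the region, and the node count are placeholders.

```sh
# Sketch: provision an AKS cluster from a pipeline script step;
# resource names, location, and sizing are placeholders.
az group create --name microservice-demo --location westeurope
az aks create \
  --resource-group microservice-demo \
  --name microservice-aks \
  --node-count 2 \
  --generate-ssh-keys
```

Because the script is idempotent enough to live in version control, the same pipeline can recreate the whole environment from scratch when needed.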
Service Mesh
Big Kubernetes clusters can be hard to manage. A service mesh like Istio helps administrators manage Kubernetes clusters, covering topics such as SSL connections, monitoring, and tracing. All of that can be achieved without any changes to the existing applications. Istio also comes with a bunch of add-ons such as Grafana, Kiali, and Jaeger to help administer the cluster.
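The “SSL without application changes” claim can be illustrated with a single Istio resource; the namespace name below is a placeholder.

```yaml
# Sketch: enforce mutual TLS between all workloads in a namespace
# without touching the applications; namespace name is a placeholder.
apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: default
  namespace: customerapi
spec:
  mtls:
    mode: STRICT
```

Istio’s sidecar proxies handle the certificate exchange, so the services themselves keep talking plain HTTP to their local proxy.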
KEDA - Kubernetes Event-Driven Autoscaling
Applications have become more and more complex over the years and often rely on external dependencies, such as an Azure Service Bus queue or a database. KEDA allows applications to scale according to these dependencies.
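Scaling on an Azure Service Bus queue could be sketched with a KEDA ScaledObject like the following; the deployment name, queue name, threshold, and the referenced trigger authentication are placeholders.

```yaml
# Sketch: scale a deployment between 0 and 10 replicas based on the
# number of messages in a Service Bus queue; all names are placeholders.
apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: customerapi-scaler
spec:
  scaleTargetRef:
    name: customerapi
  minReplicaCount: 0
  maxReplicaCount: 10
  triggers:
    - type: azure-servicebus
      metadata:
        queueName: customerqueue
        messageCount: "5"
      authenticationRef:
        name: servicebus-trigger-auth
```

Unlike the Horizontal Pod Autoscaler, KEDA can scale all the way down to zero replicas when the queue is empty.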
AAD Authentication
Azure Active Directory authentication allows applications to authenticate using Azure identities. The advantage of this approach is that no passwords need to be stored or managed for the connection.
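With Microsoft.Data.SqlClient, this boils down to a connection string without any credentials in it; the server and database names below are placeholders.

```
Server=tcp:myserver.database.windows.net,1433;Database=CustomerDb;Authentication=Active Directory Default;
```

The `Active Directory Default` option lets the client library pick up whatever Azure identity is available at runtime, for example the managed identity of the pod in AKS.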
All Posts in Chronological Order
The following list consists of all blog posts in chronological order:
Run xUnit Tests inside Docker during an Azure DevOps CI Build
Restore NuGet Packages from a Private Feed when building Docker Containers
Publish NuGet Packages to Nuget.org using Azure DevOps Pipelines
Deploy to Azure Kubernetes Service using Azure DevOps YAML Pipelines
Auto-scale in Kubernetes using the Horizontal Pod Autoscaler
Replace Helm Chart Variables in your CI/CD Pipeline with Tokenizer
Automatically Deploy your Database with Dacpac Packages using Linux and Azure DevOps
Deploy a Docker Container to Azure Functions using an Azure DevOps YAML Pipeline
Configure custom URLs to access Microservices running in Kubernetes
Automatically issue SSL Certificates and use SSL Termination in Kubernetes
Deploy Microservices to multiple Environments using Azure DevOps
Deploy every Pull Request into a dedicated Namespace in Kubernetes
Use Infrastructure as Code to deploy your Infrastructure with Azure DevOps
Debug Microservices running inside a Kubernetes Cluster with Bridge to Kubernetes
Collect and Query your Kubernetes Cluster Logs with Grafana Loki
Automatically set Azure Service Bus Queue Connection Strings during the Deployment
Use AAD Authentication for Applications running in AKS to access Azure SQL Databases