Is Azure Kubernetes Service Dead?

In the last year or so, one question I have been asked persistently by both junior and senior software engineers is where they should deploy their containerised services. Not too long ago, this was never even a question - at least on Azure. Azure only provided a managed Kubernetes service. You could, in theory, deploy your container to Azure Web Apps, but there was little benefit unless your service's dependencies were not available in the Azure environment. Anyone containerising applications at the time had only one platform to deploy to.

However, in the ever-evolving landscape of cloud computing, Azure now offers two prominent services for containerised applications: Azure Kubernetes Service (AKS) and Azure Container Apps. With the rising popularity of Container Apps, many engineers (and organisations) are questioning which service to deploy their containerised apps to and whether AKS is becoming obsolete. I may have contributed to that impression among the engineers I have interacted with, partly because most of their use cases fit Container Apps rather than AKS. In this article, I dive deep into both services to clarify their distinct value propositions and use cases. I will conclude by answering the question, “Is AKS dead?”.
The Rise of Azure Container Apps
Azure Container Apps, introduced in 2021, represents Microsoft's serverless container offering. It provides a simplified approach to deploying containerised applications without the complexity of managing a Kubernetes cluster. Think of it as a middle ground between Azure Functions and AKS. It is a fully managed serverless container platform that simplifies application deployment and scaling. It eliminates the need for managing infrastructure or Kubernetes clusters, allowing developers to focus on building and deploying applications.
Key Features of Container Apps:
Serverless: Automatic scaling based on HTTP traffic, events, or any Kubernetes Event-Driven Autoscaling (KEDA) supported trigger.
Built-in Dapr integration: Support for the Distributed Application Runtime (Dapr) makes it easier to implement state management, message brokering (pub/sub), and service-to-service communication.
Built-in service discovery and ingress: Each container app in your environment can communicate with others using its DNS name. There’s no need to configure or manage complex networking settings manually.
Fully managed: Azure handles infrastructure provisioning, scaling, and security.
Simplicity: Developers do not need to understand Kubernetes internals to deploy and manage applications.
Pay-per-use pricing model: You can scale your services to zero instances and thereby save during periods of no activity in your applications.
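To make the KEDA-driven, scale-to-zero behaviour concrete, here is a hedged sketch of a custom scale rule that scales a Container App on Azure Service Bus queue depth. The queue name, message-count target, secret name, and replica limits are illustrative assumptions, not values from this article:

```yaml
# Illustrative Container Apps scale configuration (not a complete app definition).
# The queue name and secret reference below are hypothetical.
scale:
  minReplicas: 0        # run zero instances while the queue is empty
  maxReplicas: 5
  rules:
    - name: queue-rule
      custom:
        type: azure-servicebus        # KEDA Azure Service Bus scaler
        metadata:
          queueName: orders
          messageCount: "20"          # target messages per replica
        auth:
          - secretRef: servicebus-connection
            triggerParameter: connection
```

With minReplicas set to 0, the app runs no instances until messages arrive, which is exactly where the pay-per-use pricing model saves money.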
Azure Kubernetes Service: Enterprise-Grade Container Orchestration
AKS is a fully managed Kubernetes service that provides a platform for deploying and managing containerised applications. It simplifies the complexities of Kubernetes by handling infrastructure provisioning, cluster management, and security. AKS is well-suited for a wide range of applications, from simple microservices to complex, stateful workloads.
Despite AKS being a managed service, Microsoft only manages the control plane, underlying infrastructure and security. Engineers, however, are responsible for a number of components such as:
Node Pool Management:
Engineers are responsible for configuring node pools, including VM sizes, auto-scaling, and labels for workload segregation.
They also need to ensure the underlying worker nodes (VMs) have sufficient capacity to run workloads efficiently.
Cluster Networking:
Setting up appropriate network models (Azure CNI or Kubenet) and configuring pod-to-pod and pod-to-service communication.
Implementing ingress controllers (e.g., NGINX, Application Gateway) and setting up DNS.
Managing IP address ranges, subnets, and custom VNET configurations.
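To make the node pool responsibility concrete, here is a hedged Azure CLI sketch of adding a user node pool with autoscaling and a workload label. The resource group, cluster, pool name, and VM size are placeholders, not values from this article:

```shell
# Add a user node pool with cluster autoscaling and a workload label.
# All names and sizes below are illustrative placeholders.
az aks nodepool add \
  --resource-group my-rg \
  --cluster-name my-aks-cluster \
  --name workerpool \
  --node-vm-size Standard_D4s_v5 \
  --enable-cluster-autoscaler \
  --min-count 1 \
  --max-count 5 \
  --labels workload=general
```

This is infrastructure work a Container Apps user never sees; in AKS it is part of the ongoing operational load.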
Despite the simplicity of Container Apps, AKS remains a powerhouse for complex, enterprise-scale container orchestration. It offers complete control over your Kubernetes environment while abstracting away the control plane management.
Key Features of AKS:
Full Kubernetes API access: AKS gives you the full power of Kubernetes, allowing custom configurations and integrations with third-party tools.
Granular Control: Developers and devops teams can fine-tune deployment strategies, manage scaling policies, and implement advanced networking and security configurations.
Multi-Container Support: Ideal for microservices architectures that require inter-container communication and advanced workflows.
Horizontal scaling: Easily scale applications up or down to meet changing demands.
Support for stateful applications: Supports persistent storage for stateful workloads like databases or message queues.
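To illustrate the stateful-workload support, here is a hedged sketch of a PersistentVolumeClaim. The claim name and size are illustrative assumptions; managed-csi is the Azure Disk CSI storage class typically available in AKS clusters:

```yaml
# pvc.yaml - illustrative only; the name and size are assumptions
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-cool-app-data
spec:
  accessModes:
    - ReadWriteOnce           # single-node read/write, typical for Azure Disk
  storageClassName: managed-csi
  resources:
    requests:
      storage: 10Gi
```

A pod then mounts the claim via volumes and volumeMounts in its spec; this degree of control over persistent storage is a common reason stateful workloads stay on AKS.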
Key Differences from a Deployment Perspective
- Development Complexity
```yaml
# Container Apps
# container-apps.yaml (simplified for illustration)
resources:
  containerApps:
    # name of the container app
    name: my-cool-app
    containers:
      # container to be deployed to this container app
      - image: myregistry.azurecr.io/mycoolapp:v1
        env:
          # environment variables to be passed to the container at runtime
          - name: PORT
            value: "80"
# Additional configuration required...
```
This YAML configuration defines an Azure Container App deployment, specifying a container named "my-cool-app" that will use an image from an Azure Container Registry (myregistry.azurecr.io/mycoolapp:v1). The configuration sets up a basic container deployment with a single environment variable (PORT set to 80), indicating how the container should be initialised and run. While this snippet provides the fundamental structure for deploying a containerised application, it represents only a partial configuration and would require additional settings for a complete container app deployment in an Azure environment. The snippet is only used for illustration.
```yaml
# AKS
# deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-cool-app
spec:
  # 3 pods for this deployment
  replicas: 3
  selector:
    # must match the labels defined under template
    matchLabels:
      app: my-cool-app
  template:
    metadata:
      labels:
        app: my-cool-app
    spec:
      containers:
        # one container specified
        - name: my-cool-app
          image: myregistry.azurecr.io/mycoolapp:v1
          env:
            - name: PORT
              value: "80"
---
# service.yaml
# Required to create a Kubernetes Service, which exposes
# the pods created by the Deployment.
apiVersion: v1
kind: Service
metadata:
  name: my-cool-app-service
spec:
  selector:
    app: my-cool-app
  ports:
    - protocol: TCP
      port: 80        # Port exposed by the Service
      targetPort: 80  # Port the container listens on
  type: ClusterIP     # Internal access within the cluster
# Additional configuration required...
```
This Kubernetes configuration defines a deployment and service for an application in Azure Kubernetes Service (AKS). The Deployment resource creates three identical pods (replicas) running a container from a specified Azure Container Registry image, ensuring high availability and consistent application instances. The accompanying Service resource provides internal cluster networking by exposing these pods on port 80, using a ClusterIP type that allows internal communication within the Kubernetes cluster. Together, these resources enable a scalable and accessible containerised application deployment, with built-in load balancing and replication to enhance reliability and performance.
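The ClusterIP Service above is reachable only inside the cluster. Exposing the application externally is yet more plumbing the engineer owns, typically via an ingress controller such as NGINX (mentioned earlier). Here is a hedged sketch, assuming an NGINX ingress controller is already installed in the cluster and using a hypothetical hostname:

```yaml
# ingress.yaml - illustrative; the hostname is hypothetical and the
# ingress class assumes an NGINX ingress controller is installed
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-cool-app-ingress
spec:
  ingressClassName: nginx
  rules:
    - host: mycoolapp.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: my-cool-app-service   # the Service defined above
                port:
                  number: 80
```

In Container Apps, external ingress is a built-in setting you toggle on; in AKS it is an explicit resource plus a controller you deploy and maintain yourself.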
- Scaling Mechanisms
```yaml
# Container Apps handles scaling automatically
scale:
  minReplicas: 0
  maxReplicas: 10
  rules:
    - name: http-rule
      http:
        metadata:
          concurrentRequests: "100"
```
The configuration defines scaling rules. It establishes a flexible scaling strategy that allows the application to automatically scale from 0 to 10 replicas based on HTTP traffic. The specific rule named "http-rule" sets a scaling threshold of 100 concurrent requests, meaning when the number of simultaneous requests to the application reaches or exceeds 100, the system will automatically increase the number of replicas to handle the load, up to a maximum of 10 instances. Conversely, during periods of low traffic, the application can scale down to zero replicas, which helps optimise resource utilisation and reduce costs by only running instances when needed.
```yaml
# AKS requires explicit configuration
# hpa.yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: my-cool-app-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-cool-app
  minReplicas: 1
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 50
```
This Kubernetes Horizontal Pod Autoscaler (HPA) configuration automatically scales the "my-cool-app" deployment based on CPU utilisation. It targets the specified deployment and allows the number of pod replicas to dynamically adjust between 1 and 10 instances. The scaling rule is triggered when the average CPU usage across all pods exceeds 50%, meaning if the deployment's pods collectively consume more than half of their allocated CPU resources, Kubernetes will automatically add more replicas to distribute the load, up to a maximum of 10 pods. Conversely, if CPU usage drops, the number of replicas will be reduced, ensuring efficient resource utilisation and maintaining application performance under varying workload conditions.
The code snippets above are intended to give an idea of the differences in deploying your containers to either environment, not to provide a full working deployment. For AKS, reaching the container deployment stage requires a lot of plumbing to ensure your cluster is production-ready and able to handle the workload. Moreover, there are ongoing administrative overheads to maintain the cluster. Given the above, the question of when to use what still remains.
When to Use What?
| Feature | Azure Kubernetes Service (AKS) | Azure Container Apps |
| --- | --- | --- |
| Infrastructure Management | Fully managed (control plane and underlying infrastructure only) | Fully managed |
| Complexity | Requires Kubernetes expertise | No Kubernetes expertise required |
| Scaling | Manual or automatic scaling | Automatic scaling |
| Networking | Advanced networking features | Simplified networking |
| Use Case | Advanced microservices, stateful apps | Event-driven apps, lightweight workloads, stateless apps |
| Cost | Higher operational costs; you pay for the VMs | Cost-efficient for intermittent usage; scale to zero |
| Developer Experience | Requires deep operational expertise | Developer-friendly, minimal overhead |
The Verdict
Is Azure Kubernetes Service dead? Far from it. While Azure Container Apps offers an excellent solution for many modern application scenarios, AKS continues to serve as the backbone for complex, enterprise-grade container orchestration. The choice between the two depends on your specific requirements, team expertise, and application architecture.
Think of Container Apps as a high-level abstraction perfect for teams wanting to focus purely on application logic, while AKS provides the full power and flexibility of Kubernetes for teams requiring complete control over their container infrastructure.
Rather than viewing them as competitors, consider them complementary services in Azure's container ecosystem. Many organisations successfully use both: Container Apps for simpler, event-driven workloads and AKS for complex, stateful applications requiring fine-grained control.
The future of containerisation in Azure likely involves both services evolving to serve their distinct use cases better, providing developers with the right tool for the right job. To the many engineers I have directed towards Container Apps, I hope this article summarises the various discussions we have had and can be referred to whenever the same question comes up again.
Written by Ronald Kainda