Azure Kubernetes Service (AKS) is a managed service to run a Kubernetes cluster on Azure.

In this article we will see how to configure AKS to support an ingress controller, create an Ingress, and create a few other resources to simulate a common deployment configuration for a microservice architecture.

Configure the AKS resource in Terraform

To route traffic towards the cluster on Azure, an Application Gateway Ingress Controller (AGIC) can be created in front of the Kubernetes services it exposes. As per the Azure documentation, the managed service includes a “reverse proxy, configurable traffic routing, and TLS termination”.

In Terraform, the property of the azurerm_kubernetes_cluster resource that enables this AKS feature, creating the ingress controller for us, is http_application_routing_enabled.

In this example a Kubernetes cluster managed in Terraform enables this property and prepares the cluster for the creation of ingress rules.

cat <<EOF > main.tf

resource "azurerm_kubernetes_cluster" "main" {
  name                = "aksMain"
  location            = var.resource_group_location
  resource_group_name = var.resource_group_name
  dns_prefix          = "aksMain"

  network_profile {
    network_plugin    = "kubenet"
    load_balancer_sku = "standard"
    outbound_type     = "loadBalancer"
  }

  default_node_pool {
    name       = "default"
    node_count = 2
    vm_size    = "Standard_D2_v2"
  }

  identity {
    type = "SystemAssigned"
  }

  # From Azure: https://learn.microsoft.com/en-us/azure/developer/terraform/create-k8s-cluster-with-aks-applicationgateway-ingress
  http_application_routing_enabled = true
}

EOF
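
The resource above references two input variables. As a minimal sketch (the names match the references in main.tf, but the default values below are just placeholders for an existing resource group of your own), a variables.tf could look like this:

cat <<EOF > variables.tf

# Referenced by the azurerm_kubernetes_cluster resource above.
# The defaults are placeholders; point them at your own resource group.
variable "resource_group_name" {
  type        = string
  description = "Name of the resource group hosting the AKS cluster"
  default     = "rg-aks-demo"
}

variable "resource_group_location" {
  type        = string
  description = "Azure region of the resource group"
  default     = "westeurope"
}

EOF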

Versions used:

  • Terraform: v1.3
  • AzureRM terraform provider: v3.28
  • Kubernetes: v1.23
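
Assuming the rest of the configuration (the azurerm provider block and the resource group) is already in place, the cluster can be created and kubectl pointed at it. The resource group name below is the placeholder used in the sketch above:

# download providers and create the cluster
terraform init
terraform apply

# merge the new cluster credentials into the local kubeconfig
# (rg-aks-demo is the placeholder resource group name used above)
az aks get-credentials --resource-group rg-aks-demo --name aksMain

# confirm kubectl can reach the cluster
kubectl get nodes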

Create and expose two sample deployments

Before moving on to the creation of ingress rules, it is better to have services to expose, so we will see proper responses! Let’s create a couple of services simulating a frontend + backend configuration.

To deploy and expose services in Kubernetes, the Deployment resource is used first: it allows managing one or more replicated pods in a convenient way. Then, the Service resource is used to expose those pods, handling the internal request routing for us.

Indeed, the Service resource is quite important in a Kubernetes cluster to properly manage traffic (internal or external) to a set of pods whose IPs may change frequently due to eviction, relocation, or scale-out.

Let’s create a frontend deployment and its ClusterIP service.

cat <<EOF > deploy.frontend.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  namespace: default
  name: frontend
  labels:
    app: frontend
spec:
  replicas: 1
  selector:
    matchLabels:
      app: frontend
  template:
    metadata:
      labels:
        app: frontend
    spec:
      containers:
        - name: frontend
          image: nginx:1.22
          ports:
            - containerPort: 80

EOF
kubectl apply -f deploy.frontend.yaml
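
Before exposing it, we can optionally check that the deployment rolled out and its pod is running:

# wait for the rollout of the deployment to complete
kubectl rollout status deployment/frontend

# list the pods created by the deployment
kubectl get pods -l app=frontend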

The service exposing it:

cat <<EOF > svc.frontend.yaml

apiVersion: v1
kind: Service
metadata:
  namespace: default
  name: frontend-svc
spec:
  type: ClusterIP
  selector:
    app: frontend
  ports:
    - protocol: TCP
      port: 80
      targetPort: 80

EOF
kubectl apply -f svc.frontend.yaml

Verify the service is working:

# list services
kubectl get svc

# query the Nginx index page of the pod, going through the cluster-internal DNS name of the service
kubectl run -it --rm --image=curlimages/curl curl-debug -- curl frontend-svc.default.svc.cluster.local

Let’s do the same for the backend, so we have a sample service there as well; you will see why this is useful in a while.

cat <<EOF > deploy.backend.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  namespace: default
  name: backend
  labels:
    app: backend
spec:
  replicas: 1
  selector:
    matchLabels:
      app: backend
  template:
    metadata:
      labels:
        app: backend
    spec:
      containers:
        - name: backend
          image: nginx:1.23
          ports:
            - containerPort: 80

EOF
kubectl apply -f deploy.backend.yaml

The service exposing it:

cat <<EOF > svc.backend.yaml

apiVersion: v1
kind: Service
metadata:
  namespace: default
  name: backend-svc
spec:
  type: ClusterIP
  selector:
    app: backend
  ports:
    - protocol: TCP
      port: 80
      targetPort: 80

EOF
kubectl apply -f svc.backend.yaml

Verify the service is working:

# list services
kubectl get svc

# query the Nginx index page of the pod, going through the cluster-internal DNS name of the service
kubectl run -it --rm --image=curlimages/curl curl-debug -- curl backend-svc.default.svc.cluster.local

Creating Ingress rules

Ingress is implemented here using the NGINX ingress controller deployed by the AKS HTTP application routing feature we enabled earlier with http_application_routing_enabled.
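
We can quickly check that the controller is running; the exact pod names vary between AKS versions, so the grep below is just a loose filter:

# the add-on typically deploys its NGINX controller pods in kube-system
kubectl get pods -n kube-system | grep -i http-application-routing

Now let’s create the Ingress resource with the routing rules for the two services.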

cat <<EOF > ingress.yaml

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: cluster-ingress
  annotations:
    # this matches the ingress controller label, enabled by the "http_application_routing_enabled" feature of AKS
    kubernetes.io/ingress.class: addon-http-application-routing
spec:
  # the rules below describe how to route the traffic
  rules:
    - http:
        paths:
          - path: /api/v1
            pathType: Prefix
            backend:
              service:
                name: backend-svc
                port:
                  number: 80

          - path: /
            pathType: Prefix
            backend:
              service:
                name: frontend-svc
                port:
                  number: 80

EOF
kubectl apply -f ingress.yaml

With the above example, the requests that arrive at the reverse proxy implementing the Ingress are routed according to the rules. They specify that requests whose URL starts with “/api/v1” go to our backend, while all the others are routed to the frontend service.
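
To verify the routing from outside the cluster, we can look up the public address assigned to the ingress and query the two paths (replace <EXTERNAL-IP> with the ADDRESS shown; it can take a few minutes to be assigned):

# show the ingress and its public address
kubectl get ingress cluster-ingress

# routed to frontend-svc (nginx 1.22 welcome page)
curl http://<EXTERNAL-IP>/

# routed to backend-svc; since the sample backend is a plain nginx image with no
# /api/v1 content, this will likely return nginx's 404 page, served by the 1.23 pod
curl http://<EXTERNAL-IP>/api/v1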

Note how Kubernetes heavily leverages references by name instead of hardcoded, possibly changing parameters (e.g. IPs). Services are targeted this way by ingress rules, as in the above example, and services target pods the same way with the selector keyword. If a service is reconfigured it might change its internal cluster IP, but as long as it keeps its name, requests keep getting routed correctly.
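
We can see this name-based indirection in action by comparing the stable service with the endpoints behind it:

# the service keeps a stable name and ClusterIP
kubectl get svc frontend-svc

# the endpoints list the current pod IPs behind the service; these can change
# whenever pods are rescheduled, while the service name stays the same
kubectl get endpoints frontend-svc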

However, if no pod is “ready” to accept traffic, a properly routed request will “fall into the void” and won’t get a response.
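
Whether a pod is “ready” is determined by its readiness probe. As a minimal sketch (the path and timings below are just examples), the frontend container defined earlier could declare one, so the service only sends it traffic once the probe succeeds:

          # excerpt of the container spec in deploy.frontend.yaml
          readinessProbe:
            httpGet:
              path: /
              port: 80
            initialDelaySeconds: 3
            periodSeconds: 5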

References