This is the second of a two-part article about building centralised configuration with Spring Cloud Config Server. In this post we’ll take the two Spring Boot services created in part one and run them on Kubernetes.

We’ll initially deploy to a local cluster before stepping things up and deploying to Azure’s managed Kubernetes Service, AKS. By the end of this post you should have two Spring Boot services deployed to an AKS cluster, as shown in the diagram below.

Source Code

The full source code for this post (and part one) is available on GitHub. If you want to get up and running quickly you can pull the code and follow the instructions in the README.

Prerequisites

To build and run the sample source code you’ll need Java, Docker and a local Kubernetes install. I use minikube locally, so if you want to follow the exact steps in this article you’ll need the same. If you’re using something other than minikube you’ll still be able to follow along; you’ll just need to be familiar with whatever local Kubernetes install you’re using.

Kubernetes Resources

Later we’re going to create a Kubernetes Deployment and Service object for both the Config Service and the Config Consumer Service. Before we do anything else, let’s take a quick look at these objects and what they actually do.

Deployment

A Kubernetes Deployment object describes the desired state of an application running in a cluster. When you define a Deployment object you specify one or more container images and the number of instances you want to run. When a Deployment is created on the cluster, it creates Pods for running the containers. For example, if we create a Deployment that specifies an nginx image with 3 replicas, then Kubernetes will create 3 Pods, each one running an nginx container. If for some reason one of these Pods dies, Kubernetes will recognise that the number of running Pods is less than the specified value in the Deployment. Kubernetes will then take action and create a new Pod, ensuring that the actual state is the same as the desired state described by the Deployment.
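
For illustration, a minimal manifest for that nginx example might look something like this (the names and image tag here are arbitrary, not part of this post’s sample code):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx:1.21
          ports:
            - containerPort: 80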

Service

A Kubernetes Service object provides a stable network address for accessing a group of pods. The Service maintains a list of active Pod IP addresses and load balances incoming requests to those Pods. This means that clients don’t need to worry about maintaining a list of active Pod replicas and their IP addresses. Instead, clients simply call the Service object and the Service takes care of routing the request to one of the Pod replicas.

There are three main types of Service:

  • ClusterIP – the default Service type. It exposes the Service on an IP address that is only reachable from within the cluster (hence ClusterIP).
  • NodePort – exposes the Service on each node’s IP at a fixed port. This allows the Service to be accessed from outside the cluster using Node_IP:Node_Port.
  • LoadBalancer – used by cloud providers like AWS and Azure to stand up an internet-facing load balancer for your cluster.

We’re going to create two Service objects, one for the Config Service and one for the Config Consumer Service. For the Config Service we’ll create a ClusterIP Service object. As shown in the architecture diagram earlier, this will be used to route traffic within the cluster, from the Config Consumer Service to the Config Service.

For the Config Consumer Service we’ll create a LoadBalancer Service so that we can access the Config Consumer Service from outside the cluster. When running locally we’ll use minikube to access the LoadBalancer Service. When we deploy to AKS, an internet facing load balancer will be stood up for the LoadBalancer Service and we’ll be able to access it over the internet.

Config Service

In this section we’ll define the Deployment and Service objects for the Config Service.

Deployment Object

Below is the Config Service Deployment definition. I’ll explain each section in detail below.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: config-service
spec:
  replicas: 2
  selector:
    matchLabels:
      app: config-service
  template:
    metadata:
      labels:
        app: config-service
    spec:
      containers:
        - name: config-service
          image: briansjavablog/config-service:k8
          ports:
            - containerPort: 8888
          readinessProbe:
            httpGet:
              path: /actuator/health
              port: 8888
            initialDelaySeconds: 20
            timeoutSeconds: 10
            periodSeconds: 3
            failureThreshold: 2
          livenessProbe:
            httpGet:
              path: /actuator/health
              port: 8888
            initialDelaySeconds: 30
            timeoutSeconds: 2
            periodSeconds: 8
            failureThreshold: 1
  • kind indicates the type of object being created.
  • metadata.name is the name of the Deployment. It can be used to inspect the object using kubectl.
  • spec.replicas specifies the number of Pod replicas Kubernetes will create, in other words, the number of instances of the Config Service we want to run.
  • spec.selector.matchLabels is used to specify which Pods are managed by this Deployment.
  • The Pods are labeled as app: config-service using template.metadata.labels. This corresponds to the label specified above in spec.selector.matchLabels.
  • template.spec.containers lists the containers to run in the Pod. containers[0].name specifies the name of the container, config-service in this instance.
  • containers[0].image specifies the name of the image to run as a container. The Dockerfile for creating the briansjavablog/config-service:k8 image is described in part one.
  • The Config Service is exposed from the container on ports.containerPort.
  • container.readinessProbe and container.livenessProbe are used to configure endpoints that are called by Kubernetes to verify the health of the Config Service.
  • readinessProbe defines the endpoint Kubernetes will call after a container starts. A successful response indicates the container is ready to receive requests.
  • livenessProbe defines the endpoint Kubernetes will call periodically to check the health of a container. A successful response indicates the container is healthy and can continue receiving requests.
  • The attributes of the livenessProbe and readinessProbe are the same and are described below.
  • xxxProbe.httpGet.path is the URI of the HTTP GET health check endpoint running in the container. This points at the Spring Boot health check endpoint.
  • xxxProbe.httpGet.port specifies the health check port, 8888 in this case.
  • initialDelaySeconds tells Kubernetes how long to wait, in seconds, before performing the first probe.
  • timeoutSeconds is the number of seconds Kubernetes will wait for a response before timing out.
  • periodSeconds defines how often (in seconds) Kubernetes should perform the probe.
  • failureThreshold defines the number of failed probes before Kubernetes considers the container unhealthy. If this happens for a liveness probe, Kubernetes will restart the container.
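
If you want to sanity check the probe endpoint yourself once the Config Service is running, you can port forward to the Deployment and call the same actuator path the probes use. A healthy Spring Boot instance returns a simple UP status.

kubectl port-forward deployment/config-service 8888:8888

# in a second terminal
curl localhost:8888/actuator/health
# {"status":"UP"}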

Service Object

Below is the Kubernetes Service definition for the Config Service. I’ll explain each section in detail below.

apiVersion: v1
kind: Service
metadata:
  name: config-service
spec:
  ports:
    - protocol: TCP
      port: 8888
      targetPort: 8888
  selector:
    app: config-service
  • kind indicates the type of object being created.
  • metadata.name is the name of the Service.
  • spec.ports defines the exposed port of the Service object and the port of the Pod that the Service will route traffic to.
  • spec.ports.protocol specifies the protocol used by the Service. TCP is the default; UDP and SCTP are also supported.
  • port is the port that the Service is accessible on. Anything inside the cluster calling the Config Service does so on port 8888.
  • targetPort is the port used to call the container. This value should be the same as the containerPort specified in the Deployment earlier. The Service receives requests on port 8888 and routes them to one of the Config Service containers on port 8888.
  • selector.app defines the set of Pods that the Service will route traffic to. This value aligns with the selector specified in the Deployment earlier.

Note that we didn’t specify spec.type. As a result, the type defaults to ClusterIP.
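
Once the objects have been created (we’ll do that shortly), you can confirm this with kubectl. The CLUSTER-IP below is illustrative, but the TYPE column should read ClusterIP and the EXTERNAL-IP should be <none>.

kubectl get service config-service

NAME             TYPE        CLUSTER-IP    EXTERNAL-IP   PORT(S)    AGE
config-service   ClusterIP   10.96.45.12   <none>        8888/TCP   1m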

Config Consumer Service

In this section we’ll define the Deployment and Service objects for the Config Consumer Service. Note that these objects are very similar to those we’ve just seen (particularly the Deployment), so I’m not going to repeat the attribute descriptions above.

Deployment Object

Below is the Config Consumer Service Deployment definition.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: config-consumer-service
spec:
  replicas: 1
  selector:
    matchLabels:
      app: config-consumer-service
  template:
    metadata:
      labels:
        app: config-consumer-service
    spec:
      containers:
        - name: config-consumer-service
          image: briansjavablog/config-consumer-service:k8
          ports:
            - containerPort: 8080
          readinessProbe:
            httpGet:
              path: /actuator/health
              port: 8080
            initialDelaySeconds: 60
            timeoutSeconds: 2
            periodSeconds: 3
            failureThreshold: 2
          livenessProbe:
            httpGet:
              path: /actuator/health
              port: 8080
            initialDelaySeconds: 70
            timeoutSeconds: 2
            periodSeconds: 8
            failureThreshold: 2

Service Object

Below is the Service definition for the Config Consumer Service.

apiVersion: v1
kind: Service
metadata:
  name: config-service-consumer-lb
spec:
  type: LoadBalancer
  selector:
    app: config-consumer-service
  ports:
    - port: 8080
      targetPort: 8080
      nodePort: 30008

Note that there is one fundamental difference between the Service above and the Config Service’s Service defined earlier. The Config Service’s Service is a ClusterIP, which means its IP is only accessible within the cluster. The Service above is a LoadBalancer, which means it’s accessible from outside the cluster. Later, when we deploy to AKS, Azure will stand up an internet-facing load balancer and expose this Service on a public IP. This will allow us to test the Config Consumer Service by calling its public-facing API.

Deploying to a Local Cluster

This section describes how to deploy your services to a local minikube cluster. Begin by starting minikube with minikube start and wait for the start-up sequence to complete.

If you look at the sample code you’ll see a file called config-server-cluster.yml containing the Deployment and Service objects we defined earlier. Using kubectl you can apply the contents of config-server-cluster.yml to your local cluster.

kubectl apply -f config-server-cluster.yml

If everything goes to plan you should see the resources created as follows.
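
The exact order follows the manifest, but the confirmation output will be along these lines:

deployment.apps/config-service created
service/config-service created
deployment.apps/config-consumer-service created
service/config-service-consumer-lb created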

Viewing the Resources

To check the status of the resources that were created you can run kubectl get all. This will list all the objects created in the cluster and provide a summary status for each.
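
Your Pod name suffixes, IP addresses and ages will differ, but the output will look broadly like this:

NAME                                           READY   STATUS    RESTARTS   AGE
pod/config-consumer-service-746586bf77-w7pbn   1/1     Running   0          2m
pod/config-service-66b858cc4c-8xkmn            1/1     Running   0          2m
pod/config-service-66b858cc4c-tqs5f            1/1     Running   0          2m

NAME                                 TYPE           CLUSTER-IP     EXTERNAL-IP   PORT(S)          AGE
service/config-service               ClusterIP      10.96.45.12    <none>        8888/TCP         2m
service/config-service-consumer-lb   LoadBalancer   10.101.7.200   <pending>     8080:30008/TCP   2m
service/kubernetes                   ClusterIP      10.96.0.1      <none>        443/TCP          5m

NAME                                      READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/config-consumer-service   1/1     1            1           2m
deployment.apps/config-service            2/2     2            2           2m

NAME                                                 DESIRED   CURRENT   READY   AGE
replicaset.apps/config-consumer-service-746586bf77   1         1         1       2m
replicaset.apps/config-service-66b858cc4c            2         2         2       2m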

As you can see from the snippet above, the two Deployment and Service objects were created as expected. Note that two ReplicaSets were also created, even though these were not explicitly defined in the manifest. So where did these ReplicaSets come from?

A Deployment creates a ReplicaSet object to ensure that the required number of Pod replicas is running at any given time. Remember that we declared two replicas for the Config Service and one for the Config Consumer Service in the Deployment objects earlier. The ReplicaSet is responsible for making sure that the requested number of Pods is running. In the event of a Pod failure, it’s the ReplicaSet that ensures a new Pod is created.

LoadBalancer Service

You may have noticed that the EXTERNAL-IP of the config-service-consumer-lb is marked as pending. In a cloud environment like Azure or AWS you’ll see a pending status while the cloud provider is spinning up an internet-facing load balancer. However, in a local cluster that’s not going to happen, so how do we access config-service-consumer-lb?

Thankfully minikube allows you to expose a LoadBalancer Service from the cluster using the minikube tunnel command.

After running minikube tunnel, in a new window run kubectl get service. You’ll see that config-service-consumer-lb now has an EXTERNAL-IP 127.0.0.1 assigned.
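
The addresses below are illustrative, but the sequence looks like this:

minikube tunnel

# in a second terminal
kubectl get service config-service-consumer-lb

NAME                         TYPE           CLUSTER-IP     EXTERNAL-IP   PORT(S)          AGE
config-service-consumer-lb   LoadBalancer   10.101.7.200   127.0.0.1     8080:30008/TCP   5m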

We can use this IP to call the Config Consumer Service from outside the cluster.

Troubleshooting Hint

If there was a problem creating any of the objects, you’ll need to drill in and figure out what’s going on. You can inspect objects using the describe command and the name of the object you want to inspect. For example, running kubectl describe pod/config-consumer-service-746586bf77-w7pbn will provide detailed metadata for the specified Config Consumer Service Pod. As well as a view of the object’s configuration, you may also be able to see useful event data associated with the object.
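
kubectl logs is a useful companion to describe; between the two you can usually see why a Pod is failing to start. The Pod name below is from my cluster, yours will have a different suffix.

kubectl describe pod/config-consumer-service-746586bf77-w7pbn

# container logs are often the quickest way to spot start-up failures
kubectl logs pod/config-consumer-service-746586bf77-w7pbn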

Testing the Config Consumer Service

It’s time to test the Config Consumer Service by calling the timeout-config endpoint with curl 127.0.0.1:8080/timeout-config. You should see a JSON response containing the sample config from part one.
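
The exact payload depends on the properties you configured in part one; the field names and values below are purely illustrative.

{
  "connectionTimeout": 4000,
  "readTimeout": 9000
}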

This proves that the Config Consumer Service and the Config Service are up and running in the cluster and that we have end-to-end connectivity. If you’ve made it this far, good job!

Deploying to Azure AKS

To deploy to AKS you’ll need an Azure account and the Azure CLI installed. If you’ve pulled the source code you’ll see a script called createAKSClusterAndDeploy.sh in the scripts directory. This script uses the Azure CLI to create a cluster and install the Config Service and Config Consumer Service. You’ll need to make sure you’ve authenticated the CLI by running az login before running the script.

#!/bin/sh

RESOURCE_GROUP=config-service-demo-resource-group
CLUSTER_NAME=config-service-demo-cluster
LOCATION=ukwest
az group create --location $LOCATION --resource-group $RESOURCE_GROUP
az aks create --resource-group=$RESOURCE_GROUP --name=$CLUSTER_NAME --node-count=1 --generate-ssh-keys --node-vm-size Standard_B2s --enable-managed-identity
az aks get-credentials --name $CLUSTER_NAME --resource-group $RESOURCE_GROUP
cd ../Kubernetes
kubectl apply -f config-server-cluster.yml

The script is pretty straightforward and essentially does four things.

  • az group create creates a new Resource Group in the region specified
  • az aks create creates an AKS cluster with one node using VM type Standard_B2s
  • az aks get-credentials pulls back the access credentials for the cluster you just created. The credentials are saved to .kube/config so that they’re available to kubectl (you can verify this after the script completes, as shown below).
  • kubectl apply -f config-server-cluster.yml installs the contents of the manifest in the cluster.
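
A quick way to verify the credentials step is to check that kubectl now points at the new cluster. AKS names the kubectl context after the cluster, so you should see something like this:

kubectl config current-context
# config-service-demo-cluster

kubectl get nodes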

The script takes a few minutes to run.

Run kubectl get all to list the resources that were created. You’ll notice that the config-service-consumer-lb Service has a public IP.
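
Filtering down to the Service in question, you’ll see something like this (the addresses are from my cluster and will differ on yours):

kubectl get service config-service-consumer-lb

NAME                         TYPE           CLUSTER-IP    EXTERNAL-IP   PORT(S)          AGE
config-service-consumer-lb   LoadBalancer   10.0.108.41   51.104.42.1   8080:30008/TCP   3m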

You should be able to access the Config Consumer Service via the public IP of the config-service-consumer-lb Service. Running curl 51.104.42.1:8080/timeout-config (substituting your own EXTERNAL-IP) should return the same JSON payload you saw when testing locally.

If you’ve made it this far, well done. You now have the Config Service and the Config Consumer Service running on AKS. Here’s a reminder of what you’ve deployed.