The quick guide to creating a production Kubernetes cluster (version 2)

This is a new, updated and simplified version of my original quick guide to Kubernetes.

Ever find yourself needing a Kubernetes cluster quickly to test an idea, run an experiment, or just to get your app into production ASAP?

Being able to create a Kubernetes cluster quickly is really useful, whether you write books and blog posts about it (like me) or are just learning Kubernetes and need an empty cluster to play with.

This post shows you the simplest and quickest way to build a Kubernetes cluster.

First things first

Do you really care where your Kubernetes cluster is hosted? Or do you just want Kubernetes for experimentation, testing and development?

Good news: since the first version of this post, it has become ridiculously easy to run your own local Kubernetes instance. In fact, you probably already have one.

If you have Docker Desktop installed, then you already have a local Kubernetes; you just need to switch it on. To learn more, please see my quick guide to your local Kubernetes instance.

If you really do want to build a cloud-based Kubernetes cluster, please read on.

Why Azure?

Kubernetes is available on all major platforms and some minor platforms.

I recommend creating Kubernetes clusters on Azure because, in my experience, it's the easiest platform, while still being flexible and configurable. You can create clusters via the Azure user interface or via the Azure CLI tool. In this post we'll use the Azure CLI tool, because I actually think it's easier to use than the UI!

Azure offers $200 of credit for new sign-ups, so you can play with this stuff for a month before you start paying real money for it. Just be sure to tear everything down afterward, otherwise you'll start paying for it!

Digital Ocean is also a good option: it's fairly easy to use and much cheaper.

Azure CLI

In this post we use the Azure CLI tool from the terminal to construct a Kubernetes cluster. It's a nice tool: pretty straightforward, and you can do everything without leaving the terminal.

The nice thing about working with the terminal is that we can build shell scripts to do this kind of thing. So after reading this post you can put the commands you use in a shell script (or a batch file on Windows) and use that to quickly instantiate a new cluster any time you need one.
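To give a flavour, here's a sketch of what such a script might look like. The resource names, location and version are just the placeholders used later in this post; substitute your own, and run az login first (each command is explained as we go):

```shell
#!/bin/bash
# create-cluster.sh: a sketch of a cluster-creation helper script.
# Assumes you have already run `az login`. All names are placeholders.
set -euo pipefail

RESOURCE_GROUP=my-kub-test      # placeholder resource group name
CLUSTER_NAME=test-kub-cluster   # placeholder cluster name
LOCATION=australiaeast          # pick your own location
K8S_VERSION=1.24.6              # pick a version available in your location

# Create the resource group that will contain the cluster.
az group create --name "$RESOURCE_GROUP" --location "$LOCATION"

# Create the cluster itself.
az aks create \
  --resource-group "$RESOURCE_GROUP" \
  --name "$CLUSTER_NAME" \
  --location "$LOCATION" \
  --kubernetes-version "$K8S_VERSION" \
  --generate-ssh-keys

# Download credentials so kubectl can talk to the new cluster.
az aks get-credentials --resource-group "$RESOURCE_GROUP" --name "$CLUSTER_NAME"
```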

To learn the Azure UI, and then how to create a Kubernetes cluster in code with Terraform, please check out my book Bootstrapping Microservices.

Pre-requisites

Create an Azure account: https://portal.azure.com/.

Install Azure CLI tool: https://docs.microsoft.com/en-us/cli/azure/install-azure-cli?view=azure-cli-latest

Setup

Now open a terminal and check that the Azure CLI tool is installed:

az --version

We see something like this at the top of the output:

azure-cli 2.40.0

To log in to our Azure account:

az login

Next we follow the instructions in our terminal to authenticate with Azure.

After logging in, we can invoke Azure CLI commands to create, destroy and update resources in our Azure account.

Check your account

Before proceeding, let's check our account:

az account list

By itself that command outputs JSON. If we've only just signed up to Azure, or only have a single account, we should see just one account in the list.

If we do happen to have multiple accounts, we can make the output more readable by using the table output format:

az account list --output table

Note which of your accounts has IsDefault set to True. If we only have one account it should be that one. The default account is the one that we'll be working with. When we create our Kubernetes cluster and other resources, they will appear in this "default" or current account.

We need to be sure the default account is the one we actually want to work with. If we have, for example, work accounts in our list we probably don't want to be making experimental Kubernetes clusters in them!

Here's another way to see our default Azure account:

az account show

If that isn't the account we want to use for our experimental work, we should change it. Use az account list --output table to pick the account to use. Take note of the SubscriptionId for that account. We must provide this ID to set an account as the default.

We use the following command to set our default account:

az account set --subscription <your-subscription-id>

Please replace <your-subscription-id> with the particular subscription ID for your account.

After changing the default account, we can check one last time to make sure we are using the account that we wanted:

az account show

What location?

Before deciding which version of Kubernetes to install, we should know the location of the data center where we'll create our cluster.

Get a list of all locations using this command:

az account list-locations

Actually that list is pretty long. Let's use JMESPath to pluck a subset of the data to make it easier to read:

az account list-locations --query "[].[name, displayName]"

Still difficult to read though, so let's use the table output format to improve that:

az account list-locations --query "[].[name, displayName]" --output table

Now we should see a readable list of locations. Pick a location and take note of the lowercase, one-word version of its name (e.g. australiaeast).

To pick out particular locations (for example, I'm looking in Australia), use grep to filter the list:

az account list-locations --query "[].[name, displayName]" --output table | grep aus

For this blog post I've picked australiaeast as the location for my cluster.

Get help

The Azure CLI has great built-in help, and I encourage you to explore it while you are using it.

To start:

az --help

You can also use help for each command and subcommand:

az account --help
az account list-locations --help

The help tells us the purpose of the command and all the arguments we can use with it. This can be very instructive, especially with the examples they provide.

Try adding --help to each of the commands listed below to learn more about each one as you continue working through this blog post.

What version?

With our location selected, we can now check out the versions of Kubernetes available there:

az aks get-versions --location australiaeast

Just replace australiaeast with your preferred location.

Again we are presented with a wall of JSON. We'll use JMESPath to pluck out the version numbers, with table output for readability:

az aks get-versions --location australiaeast --query "orchestrators[].orchestratorVersion" --output table

Or to get just the last version in the list (the latest version), use a -1 inside the array brackets. This is hardly necessary, but it's a nice feature of JMESPath:

az aks get-versions --location australiaeast --query "orchestrators[-1].orchestratorVersion" --output table

Take note of the latest version number. At the time of writing, the latest version in my location is 1.24.6 so that's the version I'm using for this blog post.

Create a resource group

We need to create an Azure resource group to contain our cluster:

az group create --name my-kub-test --location australiaeast

A resource group is a way to collect and manage groups of cloud resources.

I've called my resource group my-kub-test; you should probably choose a better name, one that indicates your intended use of the resource group. Write down the name of your resource group; we'll need it for subsequent commands.

We can check that our resource group was created with this command:

az group list --output table

We should see our new resource group in the output. If this is the first resource group we've created, it will be the only one in the list.

Create your Kubernetes cluster

We are now ready to create our Kubernetes cluster. Here's the command:

az aks create --resource-group my-kub-test --name test-kub-cluster --location australiaeast --kubernetes-version 1.24.6 --generate-ssh-keys

I've called my cluster test-kub-cluster. Again, you should choose a name that's meaningful for your intended purpose. Write down the name of your cluster; we'll need it for subsequent commands.

Be sure to plug in your own names for the resource group and the cluster, along with the location where you want to host it and the version of Kubernetes that you selected earlier.

The option --generate-ssh-keys generates SSH keys into the .ssh directory under your user/home directory for access to the virtual machine(s) running your cluster. If you plan to keep this cluster around, please backup these keys and keep them safe.

Now we must wait! This part can take some time, make a coffee or two.

Tip: The above command defaults to creating a Kubernetes cluster with three nodes. That's great for a fault-tolerant production cluster, but it's also three times the cost. To reduce costs for experimenting and development, use --node-count 1 to set the number of nodes to just one. It won't be fault tolerant, but it will be much less expensive to run.
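For example, a cut-down, single-node version of the create command might look like this (same placeholder names as above; plug in your own):

```shell
# Create a one-node cluster to keep experimentation costs down.
az aks create --resource-group my-kub-test --name test-kub-cluster \
  --location australiaeast --kubernetes-version 1.24.6 \
  --generate-ssh-keys --node-count 1
```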

Check your cluster

Let's check our cluster has been created. First, we list our resource groups:

az group list --output table

We should have a couple of extra resource groups now. Of course there's my-kub-test (or whatever you called your resource group), which we explicitly created earlier and which contains our Kubernetes cluster.

Also note that a couple of other resource groups have been created automatically. The first has a long generated name like MC_my-kub-test_test-kub-cluster_australiaeast. Yours might differ, because the name is generated from the name of the resource group, the name of the cluster and the selected location. This group contains additional resources created for the managed cluster. We'll have a look at its resources in a moment.

There's another automatically created group, NetworkWatcherRG. This has to do with debugging and troubleshooting your virtual network (which was also created automatically for you).

Now let's list the resources in our my-kub-test resource group:

az resource list --resource-group my-kub-test --output table

Be sure to plug in the name for your own resource group here.

We should see just one resource (unless we started with a resource group that already had existing resources in it). That resource is our Kubernetes cluster.

Now let's take a look at the contents of the automatically generated MC_my-kub-test_test-kub-cluster_australiaeast resource group. Just be sure to plug in the name of the resource group that was generated in your Azure account (retrieved by az group list --output table).

az resource list --resource-group MC_my-kub-test_test-kub-cluster_australiaeast --output table

Now we see a bigger list of resources. These are all the resources that were automatically created to support our Kubernetes cluster: things like virtual machines, virtual disks and virtual networks. This is all the stuff we are paying for (after our $200 credit runs out), so be sure to delete it when you are done! Also, try not to manually tweak anything in this list; doing so can easily break the cluster.

To learn more about the automatically generated resource group read this:
https://learn.microsoft.com/en-us/azure/aks/faq#why-are-two-resource-groups-created-with-aks

Interface with your cluster

What good is a Kubernetes cluster if we can't interact with it and deploy our application? So let's get set up for that.

For this you need the Kubectl tool installed: https://kubernetes.io/docs/tasks/tools/install-kubectl/

Note: If you have Docker Desktop installed and the local Kubernetes enabled, you probably already have Kubectl. To check, run kubectl version and see if a recent version is already installed.

Tip: If you don't already have Kubectl installed, here's an easy way to install it using the Azure CLI:

az aks install-cli

After installing Kubectl, we can connect it to our cluster by downloading credentials through the Azure CLI:

az aks get-credentials --resource-group my-kub-test --name test-kub-cluster

Next, we should check the connection to our cluster by invoking practically any Kubectl command:

kubectl get nodes

We should see a list of the nodes in our cluster! And we can now use Kubectl to manage our cluster. Happy days.

Saving Kubernetes configuration for later

At this point we might want to save our Kubectl configuration so that we can use it in our automated deployment pipeline (I'll write about this in more detail in a separate blog post, watch this space).

Because the command az aks get-credentials merges configuration for our cluster with the existing configuration, we might want to delete our local configuration first.

WARNING Don't do this if you have local configuration that you can't lose.

Delete our local configuration:

rm -r ~/.kube

(On Windows you might have to delete the directory c:\Users\<Username>\.kube).

Now when we download the Kubernetes cluster credentials they'll be the only credentials in our local configuration:

az aks get-credentials --resource-group my-kub-test --name test-kub-cluster

Again, test that we can interact with our cluster:

kubectl get nodes

Now we know that the only configuration that is in ~/.kube/config is the configuration for our new cluster.

To use it as a GitHub Secret (as input to a GitHub Actions deployment workflow), we should base64 encode the Kubernetes configuration file:

cat ~/.kube/config | base64

We can now copy the resulting value to a GitHub Secret. I usually call it KUBE_CONFIG.
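If you want to sanity-check the encoded value before pasting it into the secret, you can round-trip it through base64 -d. Here's a sketch using a throwaway sample file standing in for the real config:

```shell
# Make a small sample file standing in for ~/.kube/config.
printf 'apiVersion: v1\nkind: Config\n' > /tmp/sample-kube-config

# Encode it, just like we encoded the real config above.
ENCODED=$(base64 < /tmp/sample-kube-config)

# Decode it again and compare with the original to prove nothing was lost.
echo "$ENCODED" | base64 -d > /tmp/sample-kube-config.decoded
diff /tmp/sample-kube-config /tmp/sample-kube-config.decoded && echo "round trip ok"
```

The same decode works on the value stored in GitHub, which is handy when debugging a deployment workflow.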

Creating a container registry

Ok so we have a cluster, but to really test that it works we should deploy something to it.

However, before that we need a container registry to which we can publish our Docker images, ready to deploy them to our cluster.

Let's create a container registry:

az acr create --resource-group my-kub-test --name myregistry --sku Basic --admin-enabled

Ah, but you can't just use myregistry as the name of your container registry. The name must be unique across Azure, so please replace myregistry with another name that is unique to your project. The name must be simple: lowercase, and it can't contain any hyphens (I honestly don't know why it's so limited).

Then check that the container registry was created OK. List the resources in the resource group and scan through to make sure the container registry is there:

az resource list --resource-group my-kub-test --output table

Linking the cluster and the container registry

There’s one more thing before we do our test deployment. We should connect our container registry and our Kubernetes cluster so that the cluster can pull images from the registry without having to authenticate.

Here’s the command to “attach” our container registry to our cluster:

az aks update --name test-kub-cluster --resource-group my-kub-test --attach-acr myregistry

Just be sure to use the name of your container registry in place of myregistry, otherwise this ain't working for you.

Connecting our cluster and container registry makes things much easier, because we don’t have to encode registry authentication credentials in our Kubernetes deployment configuration. It keeps the setup of our deployment much simpler, and it’s perfectly safe here: both the container registry and the Kubernetes cluster are resources that we control and trust, so we can safely “pre-authenticate” the cluster to talk to the registry. This simplification works great for a simple setup like this, but it might not suit production, depending on where and how you host your cloud resources and what security model you use.

Note that if you don’t attach your container registry to your cluster, the cluster will fail to pull your microservice’s image from the registry. You can also make this work by encoding the container registry authentication in your deployment configuration file, but that’s more complicated and unnecessary when you can just attach the registry to the cluster. If you later see an error like ErrImagePull or ImagePullBackOff, that’s an indication that you didn’t get this step right.
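For reference, the more complicated alternative looks roughly like this sketch (the registry server, username and password are placeholders): store the registry credentials in the cluster as a secret, then reference that secret from your deployment configuration.

```shell
# Store registry credentials in the cluster as a "docker-registry" secret.
# Replace the server, username and password with your registry's details.
kubectl create secret docker-registry registry-creds \
  --docker-server=myregistry.azurecr.io \
  --docker-username=<username-for-your-registry> \
  --docker-password=<password-for-your-registry>
```

Then, in the pod spec of your deployment YAML, you'd reference the secret:

    imagePullSecrets:
      - name: registry-creds

As noted above though, attaching the registry to the cluster avoids all of this.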

Doing a test deployment

Ok let's get something deployed so that we can test our shiny new Kubernetes cluster.

Here's an example project we can use: https://github.com/ashleydavis/nodejs-example.git

To deploy this example, you'll need Node.js and Docker Desktop installed.

Clone the example repo:

git clone https://github.com/ashleydavis/nodejs-example.git

Then change to the directory and install dependencies:

cd nodejs-example
npm install

Set environment variables for connecting to our container registry:

export CONTAINER_REGISTRY=<your-container-registry>
export REGISTRY_UN=<username-for-your-registry>
export REGISTRY_PW=<password-for-your-registry>
export VERSION=1

Note: You can find these details for connecting to your container registry by looking at its page in the Azure Portal.

The example contains shell scripts for building, publishing and deployment. Feel free to peek at the shell scripts to see how they work (they are pretty simple).

Build the Docker image:

./scripts/build-image.sh

Publish the Docker image to our container registry:

./scripts/push-image.sh

Run the deployment, expanding the templated Kubernetes configuration (using Figit) and piping the result to kubectl apply:

./scripts/deploy.sh

After deployment completes, check the status of the pod and deployment:

kubectl get pods
kubectl get deployment

To find the IP address allocated to the web server, invoke:

kubectl get services

Pull out the EXTERNAL-IP address for the nodejs-example service and put that into your web browser. You should see the hello world message in the browser.
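If you'd rather pluck out the IP address directly (handy in scripts), kubectl's jsonpath output format can do it. This assumes the service is named nodejs-example, as in the example project:

```shell
# Print just the external IP of the nodejs-example service.
kubectl get service nodejs-example \
  --output jsonpath='{.status.loadBalancer.ingress[0].ip}'
```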

To check console logging for the Node.js app:

kubectl logs <pod-name>

Be sure to use the actual name of the pod (from the output of kubectl get pods) in place of <pod-name>.

Tear down

When done, don't forget to tear down our cluster and all the resources that were created.

Azure resource groups make this easy, just delete the main resource group my-kub-test:

az group delete --name my-kub-test

When you delete the resource group containing your cluster, the automatically generated resource group is deleted along with it. That's nice.

Unfortunately the NetworkWatcherRG group doesn't get cleaned up, but you can delete it explicitly if you like:

az group delete --name NetworkWatcherRG

Conclusion

If you followed along with this post you created (and then destroyed) a Kubernetes cluster.

This is a quick and simple way to create a managed Kubernetes cluster for those times when you want one to experiment on and try out new stuff.

Have fun with your Kubernetes cluster!