Hello…I’m back again, but I’m having to change direction a bit from my planned blog for this month. Initially, my follow-up to my previous blog was to be a walkthrough of deploying workloads to Azure Stack HCI with AKS running on it: basically, putting it all together once you have AKS up and running. However, I’ve hit a snag with the latest AKS preview and have been unable to get it deployed to my 2-node Azure Stack HCI instance.
So, with that, I’m going to change it up a little and shift my focus to deploying the AKS Engine to Azure Stack Hub, followed by deploying a Kubernetes cluster with it. I’m a bit surprised, really, that my Azure Stack Hub ASDK instance – a single-node instance of Hub that lets you do some functional testing and get familiar with how Hub works – is still running, as I’ve not done much with it lately and it had been unable to start in my nested virtualization environment. Anyway, I spent a few minutes this morning working through the startup issues and got it to boot up, and here we are. Recovering from that may be a story for another day.
Before I jump into the deployment process for the AKS Engine, I want to note what is different here versus my last blog on deploying AKS to Azure Stack HCI. The AKS Engine is a command-line tool that allows you to deploy and manage a Kubernetes cluster in both Azure Public and Azure Stack Hub. You use it to create, upgrade, and scale Azure Resource Manager (ARM)-native clusters. On Hub, it can deploy a cluster in both connected and disconnected environments. Understand that there are differences between running the AKS Engine in Azure Public and on-premises with Azure Stack Hub. Be sure to check out those differences so you have the right expectations about what you can and cannot do with the AKS Engine on Hub.
With that out of the way, let’s do this…
We need to first get our prerequisites inline:
First, the Azure Stack Hub prerequisites that need to be carried out by a Cloud Operator; this may be you if you are using an ASDK for this exercise.
| Prerequisite | Required? | Notes |
| --- | --- | --- |
| Azure Stack Hub 1910+ | Yes | The AKS Engine requires Azure Stack Hub version 1910 or later. The latest version at the time of writing is 2008. |
| Linux Custom Script Extension 2.0.6+ | Yes | This must be available in your subscription. If it isn’t, reach out to your Cloud Operator to have them add it. |
| AKS Base images | Yes | Like the Custom Script Extension, this needs to be available in your subscription, so have your Operator add it. If you are the Cloud Operator, refer here to get it added to the Marketplace. |
| Service Principal identity (SPN) | Yes | Note that if you are using an Azure AD SPN, Internet access is required from the VMs in the Kubernetes cluster so that the SPN can authenticate with Azure AD. If Internet access is not available, the Kubernetes cluster will not be functional. |
| Assign the SPN to the Contributor role | Yes | Your Azure Stack Operator will need to complete this action before you get started. Refer to Assign a role for more instructions. |
The next Prerequisites will be done by you:
| Prerequisite | Required? | Notes |
| --- | --- | --- |
| Azure Stack Hub subscription | Yes | To deploy any workloads in Hub, you need a subscription from the tenant portal. Go to Azure Stack Hub Subscription for more info. |
| Resource group | Optional | I’ll be using MySub-rg for this exercise. You can create a new one or use an existing one from your tenant subscription. |
| Public/private key pair | Yes | Refer to SSH Key Generation for instructions to create a public/private key pair. |
This blog assumes you have already reviewed and implemented the above prerequisites, so I will not walk through those steps here.
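As a quick refresher on that last prerequisite, the key pair can be generated with ssh-keygen. The file path and key comment below are just illustrative; use whatever naming fits your environment:

```shell
# Generate a 2048-bit RSA key pair (path and comment are illustrative)
mkdir -p ~/.ssh
ssh-keygen -t rsa -b 2048 -f ~/.ssh/aks-demo -N "" -C "aks-engine-demo"
# This prints the public key you'll paste into the portal / API model later
cat ~/.ssh/aks-demo.pub
```

The private half (~/.ssh/aks-demo) stays on your machine; only the contents of the .pub file get pasted into Hub.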
We’ve got all the prerequisites secured and we are ready to go. First, we will need either a Linux or Windows VM that we’ll use to host the AKS engine in order to deploy and manage a Kubernetes cluster. For this exercise, I’ll be using a Linux VM based on Ubuntu 18.04 LTS. This is available in my Hub marketplace. So, I’ll create a new resource for this VM using the portal as you can see below. If you are using Windows, the steps will be the same here.
For the creation of the VM, you will need to specify a few parameters that are specific to your environment. Note that you will need to provide the SSH public key that was created as part of the prerequisites at the start of this blog. You can use an SSH password instead, but that is less desirable since it offers weaker security than certificate-based authentication.
Choose the Azure VM type/size that you need to support your workloads. In my case for this exercise, I’m taking the default (denoted by the star).
On the next screen, configure things to your needs. I kept the defaults except for the public inbound ports: since I need to connect to this VM over SSH, I allowed the SSH port to be opened.
Once everything is ready to go and you’ve reviewed it for accuracy, click OK to begin the Linux VM deployment.
This will take a bit of time, but once complete, you will see it listed in the Virtual Machines blade (found on the left menu pane), and the new VM should show that it is running.
I now have the VM up and running. Next, connect to it and determine which AKS Engine version to use based on the AKS base image you pulled down as part of the prerequisites. In my case, the AKS base image is the Ubuntu 18.04 LTS version, so I need to ensure the AKS Engine version I use aligns with that. This can be determined using the table found here. Since I’m using the latest versions available for both Hub (2008) and Ubuntu 18.04 LTS, my AKS Engine version will be v0.60.1.
With that info in hand, issue the following commands from the Linux console:

```shell
curl -o get-akse.sh https://raw.githubusercontent.com/Azure/aks-engine/master/scripts/get-akse.sh
chmod 700 get-akse.sh
./get-akse.sh --version v0.60.1
```
Once we have completed the above commands, let’s verify the installation is OK.
- Connect to your client VM
- Run `aks-engine version` and confirm it reports the version you just installed (v0.60.1 in my case)
- Next, if you are using the ASDK and, thus, a self-signed certificate, you need to explicitly add the root certificate to the machine’s trusted certificate store. You can find the root certificate in the VM at this path: /var/lib/waagent/Certificates.pem.
Copy the certificate file and update the trusted store with the following commands:

```shell
sudo cp /var/lib/waagent/Certificates.pem /usr/local/share/ca-certificates/azurestackca.crt
sudo update-ca-certificates
```
OK…we’ve got our AKS-engine installed on our host and our self-signed certificate trusted; we are now ready to deploy a Kubernetes cluster with the AKS engine on Hub.
To do this, we will first need to define our cluster specification that uses a JSON file for our API model. This API model is what is used by the AKS Engine to create the cluster specification.
- Let’s grab the API model file:

```shell
curl -o kubernetes-azurestack.json https://raw.githubusercontent.com/Azure/aks-engine/patch-release-v0.60.1/examples/azure-stack/kubernetes-azurestack.json
```
- Open the downloaded JSON file, kubernetes-azurestack.json
- Find the orchestratorRelease and orchestratorVersion fields in this file and set them to the Kubernetes version that your AKS Engine supports. For mine, they are 1.17 and 1.17.17, respectively.
- From there, find the customCloudProfile section and provide the URL to the tenant portal. In my case, since this is the ASDK, it is https://portal.local.azurestack.external. If you are using AD FS in your Hub deployment, you will also need to modify the identitySystem field to reflect that; otherwise, you can leave it blank.
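Pulling those edits together, the top of my API model ends up looking roughly like this (the version numbers and portal URL are from my ASDK environment; yours will differ):

```json
"orchestratorProfile": {
  "orchestratorType": "Kubernetes",
  "orchestratorRelease": "1.17",
  "orchestratorVersion": "1.17.17"
},
"customCloudProfile": {
  "portalURL": "https://portal.local.azurestack.external",
  "identitySystem": ""
}
```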
- Next, move down to the masterProfile to set the following fields in the snapshot:
A couple of notes on these fields…
- dnsPrefix – This needs to be a unique string that will identify the hostnames of the VMs. I’ve used the name of my resource group.
- Count – This is the number of masters you want for your deployment. The minimum is 3 for any HA deployment but for non-HA/Test environments, 1 is sufficient.
- vmSize – Enter a supported Azure Stack Hub VM size
- distro – Define what Linux distro you are using that maps to the correct AKS-Engine and Kubernetes version you are using. In my case, this is Ubuntu 18.04
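As a sketch, a masterProfile along these lines matches the fields above (the dnsPrefix and vmSize shown are illustrative; use your own prefix and any VM size your Hub stamp supports):

```json
"masterProfile": {
  "dnsPrefix": "mysub-rg",
  "count": 1,
  "vmSize": "Standard_D2_v2",
  "distro": "aks-ubuntu-18.04"
}
```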
- Then we want to modify the section right below this for agentPoolProfiles
More info on these specific fields:
- Count – Enter the number of agents you want for your deployment. The maximum number of nodes per subscription is 50.
- vmSize – Enter a supported Azure Stack Hub VM size
- distro – Like the previous entry, provide the Linux distro you will be using.
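For reference, an agentPoolProfiles entry shaped like the following covers those fields (the pool name and vmSize here are illustrative):

```json
"agentPoolProfiles": [
  {
    "name": "linuxpool",
    "count": 3,
    "vmSize": "Standard_D2_v2",
    "distro": "aks-ubuntu-18.04",
    "availabilityProfile": "AvailabilitySet"
  }
]
```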
- Finally, in the linuxProfile section:
The public key is what was created as part of the prerequisites at the start of this blog. Note: this public key value must be a single line with no line breaks and must be prefaced with ssh-rsa, like in the output above.
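For illustration, the linuxProfile ends up shaped like this (the username and truncated key are placeholders; paste your full public key as a single line):

```json
"linuxProfile": {
  "adminUsername": "sysadmin",
  "ssh": {
    "publicKeys": [
      { "keyData": "ssh-rsa AAAAB3...rest-of-public-key" }
    ]
  }
}
```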
Now we’ve prepared our API model via the downloaded JSON file, mapping it to the requirements of our environment. Next up, let’s get that cluster created.
Before you start the actual deployment, it’s always good to ensure that Hub is running correctly. You can do this yourself if you are the Cloud Operator (or running the ASDK); otherwise, you will need to ask your Cloud Operator to do it, since it requires direct access to Hub.
Assuming you do have the required Cloud Operator access, run the Test-AzureStack PowerShell command from the Privileged Endpoint. For more information on connecting to the Privileged Endpoint, please refer to “Use the privileged endpoint in Azure Stack Hub”.
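If you’ve not used the Privileged Endpoint before, the general shape is sketched below; note that the ERCS VM name (AzS-ERCS01 is the ASDK default) and the CloudAdmin credentials are specific to your environment:

```powershell
# Connect to the Privileged Endpoint and run the stamp validation.
# AzS-ERCS01 is the default ERCS VM name on the ASDK; use your own
# endpoint name/IP on a multi-node stamp.
$cred = Get-Credential   # CloudAdmin credentials
$session = New-PSSession -ComputerName AzS-ERCS01 -ConfigurationName PrivilegedEndpoint -Credential $cred
Invoke-Command -Session $session -ScriptBlock { Test-AzureStack }
Remove-PSSession $session
```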
Let’s issue the following command to deploy our Kubernetes Cluster:
```shell
aks-engine deploy \
  --azure-env AzureStackCloud \
  --location local \
  --resource-group MySub-rg \
  --api-model ./kubernetes-azurestack.json \
  --output-directory MySub-rg \
  --client-id xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx \
  --client-secret xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx \
  --subscription-id xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx \
  --identity-system adfs
```

A few notes on these parameters:
- --location – For the ASDK, this should be local; otherwise, use the location of your multi-node stamp
- --resource-group – Modify to reflect your resource group
- --output-directory – Where the generated artifacts land; I’ve named it after the resource group specified above
- --client-id and --client-secret – The SPN application ID and client secret you created as part of the prerequisites
- --subscription-id – The subscription ID of your tenant subscription
- --identity-system adfs – Optional: only required if using AD FS; otherwise, you can omit it
Once that completes, we should have an active Kubernetes cluster so let’s verify it is working as expected.
At completion, you should see that the deployment succeeded like below:
To verify it’s working as expected, we need to grab the public IP address of any one of the master nodes VMs found in the Hub portal:
Then, from a machine that has access to your Hub instance, SSH into this master node using an SSH client such as PuTTY. For the SSH username, use the default ‘azureuser’ or the one you specified in the API model configuration file (I used sysadmin in my case). Also, use the private key that you provided for the deployment of the cluster.
If successful, you should now be at the Master node’s prompt where you can issue the following commands to create a sample deployment of a Guestbook application called mongo-deployment. Note: This only works for connected Hub instances.
```shell
kubectl apply -f https://raw.githubusercontent.com/kubernetes/website/master/content/en/examples/application/guestbook/mongo-deployment.yaml
```
Query the list of pods:
```shell
kubectl get pods
```
Then you can also view the deployment logs using:
```shell
kubectl logs -f <pod name>
```
You can also review the node states:
```shell
kubectl get nodes
```
That’s it! You have successfully deployed the AKS-Engine and a Kubernetes Cluster to your Azure Stack Hub instance. You’ve also deployed a sample application to the cluster. You can clean up the sample application using kubectl as well:
```shell
kubectl delete deployment -l app=mongo
```
A couple of things to consider now that you have it running. This is not AKS in the sense of a managed service like you would expect in Azure Public, so some general housekeeping is up to you, the owner. One of those chores is rotating your service principal secret regularly. For a more complete reference to the AKS Engine, please refer to the online documentation on GitHub.
I hope you found this information useful. Check back soon for a follow-up where I’ll configure the Kubernetes Dashboard, which is included with Kubernetes and provides a clean way to view your cluster status. I’ll also walk through workload deployments against both Azure Stack Hub and Azure Stack HCI.