
Deploying the AKS-Engine to Azure Stack Hub

Hello…I’m back again, but I’m having to change direction a little from my planned blog for this month. Initially, my follow-up to my previous blog was going to be a walkthrough of deploying workloads to an Azure Stack HCI instance that has AKS deployed to it – basically, putting it all together after you have AKS up and running. However, I’ve hit a snag with the latest AKS preview and have been unable to get it deployed to my 2-node Azure Stack HCI instance.

So, with that, I’m going to change it up a little and shift my focus to deploying the AKS Engine to Azure Stack Hub, followed by deploying a Kubernetes cluster to it. I’m a bit surprised, really, that my Azure Stack Hub ASDK instance – a single-node instance of Hub that lets you do some functionality testing and get familiar with how Hub works – is still viable, as I’ve not done much with it of late and it had been unable to start in my nested virtualization environment. Anyway, I spent a few minutes this morning working through the startup issues and got it to boot, and here we are…Recovering from that may be a story for another day.

Before I jump into the deployment process for the AKS Engine, I wanted to note what is different here versus what I did in my last blog with deploying AKS to Azure Stack HCI. The AKS Engine is a command-line tool that allows you to deploy and manage a Kubernetes cluster in both Azure Public and Azure Stack Hub. You use it to create, upgrade, and scale Azure Resource Manager (ARM) native clusters. On Hub, it can be used to deploy a cluster in either connected or disconnected environments. Understand that there are differences between running the AKS Engine in Azure Public and on-premises with Azure Stack Hub. Be sure to check out those differences so you have the right expectations about what you can and cannot do with the AKS Engine on Hub.

With that out of the way, let’s do this…

We first need to get our prerequisites in line:

Azure Stack Hub Prerequisites that will need to be carried out by a Cloud Operator. This may be you if you are using an ASDK for this exercise.

  • Azure Stack Hub 1910+ (required) – The AKS Engine requires Azure Stack Hub 1910 or later. Currently, the latest version is 2008.
  • Linux Custom Script Extension 2.0.6+ (required) – This must be available in your subscription. If it isn’t, reach out to your Cloud Operator to have them add it.
  • AKS Base images (required) – Like the Custom Script Extension, this needs to be available to your subscription, so have your Operator add it. If you are the Cloud Operator, refer here to get it added to the Marketplace.
  • Service Principal Identity (SPN) (required) – Note that if you are using an AAD SPN, Internet access is required from the VMs in the Kubernetes cluster so that the SPN can authenticate with Azure AD. If Internet access is not available, the Kubernetes cluster will not be functional.
  • Assign the SPN to the Contributor role (required) – Your Azure Stack Operator will need to complete this action before you get started. Refer to “Assign a role” for more instructions.

The next set of prerequisites will be done by you:

  • Azure Stack Hub subscription (required) – To deploy any workloads on Hub, you need a subscription from the tenant portal. See Azure Stack Hub Subscription for more info.
  • Resource group (optional) – I’ll be using MySub-rg for this exercise. You can create a new one or use an existing one from your tenant subscription.
  • Public/private key pair (required) – Refer to SSH Key Generation for instructions to create a public/private key pair.

This blog assumes you have already reviewed and implemented the above prerequisites, so I will not be walking through those steps here.

We’ve got all the prerequisites secured and we are ready to go. First, we need either a Linux or Windows VM to host the AKS Engine, which we’ll use to deploy and manage a Kubernetes cluster. For this exercise, I’ll be using a Linux VM based on Ubuntu 18.04 LTS, which is available in my Hub Marketplace. I’ll create a new resource for this VM using the portal, as you can see below. If you are using Windows, the steps are the same.

For the creation of the VM, you will need to specify a few parameters that are specific to your environment. Note that you will need to provide the SSH public key that was created as part of the prerequisites at the start of this post. You can use an SSH password instead, but that is less desirable since it offers weaker security than key-based authentication.

Choose the Azure VM type/size that you need to support your workloads. In my case for this exercise, I’m taking the default (denoted by the star).

On the next screen, configure to your needs. I chose the defaults except for the public inbound ports: since I need to connect to this VM over SSH, I allowed the SSH port to be opened.

Once everything is ready to go and you’ve reviewed it for accuracy, click OK to begin the Linux VM deployment.

This will take a bit of time, but once complete, you will see it listed in the Virtual Machines blade (found in the left menu pane), and the new VM should show that it is running.

I now have the VM up and running. Connect to it, and then determine which AKS Engine version to use based on the AKS Base Image that you pulled down as part of the prerequisites. In my case, the AKS Base Image uses Ubuntu 18.04 LTS, so I need to ensure the AKS Engine version I use aligns with that. This can be determined using the table found here. Since I’m using the latest versions available for both Hub (2008) and Ubuntu 18.04 LTS, my AKS Engine version will be v0.60.1.

With this info in hand, we will now issue the following commands from the Linux console:

Bash
curl -o get-akse.sh https://raw.githubusercontent.com/Azure/aks-engine/master/scripts/get-akse.sh
chmod 700 get-akse.sh
./get-akse.sh --version v0.60.1

Once the above commands have completed, let’s verify the installation is OK.

  1. Connect to your client VM.
  2. Run the following command:
Bash
aks-engine version
  3. Next, if you are using the ASDK and, thus, a self-signed certificate, you need to explicitly add the root certificate to the trusted certificate store of the machine. You can find the root certificate in the VM at this path: /var/lib/waagent/Certificates.pem.

Copy the certificate file and update the trusted store with the following commands:

Bash
sudo cp /var/lib/waagent/Certificates.pem /usr/local/share/ca-certificates/azurestacka.crt
sudo update-ca-certificates

OK…we’ve got our AKS Engine installed on our host and our self-signed certificate trusted; we are now ready to deploy a Kubernetes cluster with the AKS Engine on Hub.

To do this, we first need to define our cluster specification using a JSON file for our API model. The AKS Engine uses this API model to create the cluster specification.

  1. Let’s grab the API model file:
Bash
curl -o kubernetes-azurestack.json https://raw.githubusercontent.com/Azure/aks-engine/patch-release-v0.60.1/examples/azure-stack/kubernetes-azurestack.json
  2. Open the downloaded JSON file, kubernetes-azurestack.json:
Bash
nano ./kubernetes-azurestack.json
  3. Find the orchestratorRelease and orchestratorVersion fields and set them to a Kubernetes version that your AKS Engine supports. For mine, these are 1.17 and 1.17.17, respectively.
  4. From there, find customCloudProfile and provide the URL to the tenant portal. In my case, since this is the ASDK, it is https://portal.local.azurestack.external. If you are using ADFS in your Hub deployment, you will also need to modify the identitySystem field to reflect this; otherwise, you can leave it blank.
  5. Next, move down to masterProfile and set the following fields:

A couple of notes on these fields…

  1. dnsPrefix – This needs to be a unique string that will identify the hostname of the VMs. I’ve used the name of my resource group.
  2. count – The number of masters you want for your deployment. The minimum is 3 for any HA deployment, but for non-HA/test environments, 1 is sufficient.
  3. vmSize – Enter a supported Azure Stack Hub VM size.
  4. distro – The Linux distro you are using, which must map to the AKS Engine and Kubernetes versions you are using. In my case, this is Ubuntu 18.04.

Then we want to modify the section right below this, agentPoolProfiles:

More info on these specific fields:

  1. count – Enter the number of agents you want for your deployment. The maximum number of nodes per subscription is 50.
  2. vmSize – Enter a supported Azure Stack Hub VM size.
  3. distro – Like the previous entry, provide the Linux distro you will be using.

Finally, in the linuxProfile section:

The public key is the one created as part of the prerequisites at the start of this blog. Note: The public key must have no spaces and must be prefaced with ssh-rsa, as in the output above.
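Pulling those edits together, here is a trimmed sketch of how the relevant sections of kubernetes-azurestack.json might look for my environment. Treat the values as illustrative placeholders – the VM size, pool name, and distro string are assumptions based on the example file, and your own dnsPrefix, counts, and key will differ:

JSON
{
  "apiVersion": "vlabs",
  "properties": {
    "orchestratorProfile": {
      "orchestratorRelease": "1.17",
      "orchestratorVersion": "1.17.17"
    },
    "customCloudProfile": {
      "portalURL": "https://portal.local.azurestack.external",
      "identitySystem": ""
    },
    "masterProfile": {
      "dnsPrefix": "mysub-rg",
      "count": 1,
      "vmSize": "Standard_D2_v2",
      "distro": "aks-ubuntu-18.04"
    },
    "agentPoolProfiles": [
      {
        "name": "linuxpool",
        "count": 3,
        "vmSize": "Standard_D2_v2",
        "distro": "aks-ubuntu-18.04"
      }
    ],
    "linuxProfile": {
      "adminUsername": "sysadmin",
      "ssh": {
        "publicKeys": [
          { "keyData": "ssh-rsa AAAA...your-public-key" }
        ]
      }
    }
  }
}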

Now we’ve prepared our API model specification via the downloaded JSON file, mapping it to the requirements of our environment. Next up, let’s get that cluster created.

Before you start the actual deployment, it’s always good to ensure that Hub is running correctly. You can do this yourself if you are the Cloud Operator (or running the ASDK); otherwise, you will need to ask your Cloud Operator to do it if you don’t have direct access to Hub.

Assuming you do have the required Cloud Operator access, run the Test-AzureStack PowerShell command from the Privileged Endpoint. For more information on connecting to the Privileged Endpoint, please refer to “Use the privileged endpoint in Azure Stack Hub”.
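As a quick sketch of what that looks like (assuming the ASDK’s default privileged endpoint name, AzS-ERCS01, and your CloudAdmin credentials – substitute your own values):

PowerShell
# Open a remote session to the privileged endpoint and run the stamp validation
$cred = Get-Credential            # CloudAdmin credentials
Enter-PSSession -ComputerName AzS-ERCS01 -ConfigurationName PrivilegedEndpoint -Credential $cred
Test-AzureStack
Exit-PSSession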

Let’s issue the following command to deploy our Kubernetes Cluster:

Bash
# --location: "local" for the ASDK; otherwise use the location of your multi-node stamp
# --resource-group / --output-directory: modify to reflect your resource group
# --client-id / --client-secret: the SPN created as part of the prerequisites
# --subscription-id: the subscription ID for your tenant
# --identity-system adfs: optional – only required if using ADFS; omit it otherwise
aks-engine deploy \
  --azure-env AzureStackCloud \
  --location local \
  --resource-group MySub-rg \
  --api-model ./kubernetes-azurestack.json \
  --output-directory MySub-rg \
  --client-id xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx \
  --client-secret xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx \
  --subscription-id xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx \
  --identity-system adfs

Once that completes, we should have an active Kubernetes cluster so let’s verify it is working as expected.

At completion, you should see that the deployment succeeded like below:

To verify it’s working as expected, we need to grab the public IP address of any one of the master node VMs from the Hub portal:

Then, from a machine that has access to your Hub instance, connect to this master node using an SSH client, like PuTTY. For the SSH username, use the default ‘azureuser’ or the one that you specified in the API model configuration file (I used sysadmin in my case). Also, use the private key that you provided for the deployment of the cluster.

If successful, you should now be at the master node’s prompt, where you can issue the following command to create a sample deployment of a Guestbook application called mongo-deployment. Note: This only works for connected Hub instances.

Bash
kubectl apply -f https://raw.githubusercontent.com/kubernetes/website/master/content/en/examples/application/guestbook/mongo-deployment.yaml

Query the list of pods:

Bash
kubectl get pods

Then you can also view the deployment logs using:

Bash
kubectl logs -f <pod name>

You can also issue the command:

Bash
kubectl cluster-info

As well as review the node states:

Bash
kubectl get nodes

That’s it! You have successfully deployed the AKS-Engine and a Kubernetes Cluster to your Azure Stack Hub instance. You’ve also deployed a sample application to the cluster. You can clean up the sample application using kubectl as well:

Bash
kubectl delete deployment -l app=mongo

A couple of things to consider now that you have it running: since this is not AKS in the sense of a managed service like you would expect in Azure Public, some general housekeeping is up to you, the owner. One of those chores is to rotate your service principal secret regularly. For a more complete reference to the AKS Engine, please refer to the online documentation on GitHub.

I hope you found this information useful. Check back soon for a follow-up where I’ll configure the Kubernetes Dashboard, which is included with Kubernetes and provides a clean way to view your cluster status. I’ll also walk through workload deployments against both Azure Stack Hub and Azure Stack HCI.


Deploying Azure Kubernetes Service on an Azure Stack HCI OS Cluster

Hopefully you have heard this week’s announcements of Microsoft’s latest operating system offering: Azure Stack HCI OS is a specialized virtualization operating system, deployed on an integrated system in a hyperconverged model and delivered as an Azure hybrid service. Delivery as an Azure hybrid service allows the OS to receive the latest in security, performance, and features that you are already used to when consuming Azure services. With Azure Stack HCI, you can deploy and run Windows and Linux virtual machines in your datacenter or at the edge, using your existing tools, processes, and skill sets, while simultaneously extending your datacenter to the cloud with the ability to integrate Azure capabilities such as Azure Backup, Azure Monitor, Azure Update Management, and Azure Security Center.

Today, we are going to talk about bringing Azure Kubernetes Service (AKS) directly into Azure Stack HCI. This brings an on-premises implementation of AKS, which automates the management and orchestration of containerized applications at scale. Before we dig into what it looks like to deploy AKS on Azure Stack HCI, let’s first talk about why you would use Kubernetes and how it’s transforming the cloud and datacenter landscape. Of course you can manage a few containers manually using Docker or similar tools, but when you build at any scale beyond a lab, applications make use of tens, hundreds, or thousands of containers. That is where the need for at-scale management and orchestration comes into play, and where Kubernetes shines.

Kubernetes, in essence, is an open-source orchestrator for automating container management at scale. AKS takes this tool and simplifies the management, deployment, and governance of it both in Azure Public Cloud and on-premises. Within Azure Stack HCI, AKS significantly simplifies and streamlines the deployment of a Kubernetes infrastructure by providing wizards for setting up the infrastructure and integrating it into Azure Stack HCI.

Some of the functionality offered by AKS during preview and initial launch includes:

  • Deployment of containerized applications at scale to a cluster of virtual machines, known as a Kubernetes cluster, running across the Azure Stack HCI cluster
  • Automatic failover when a node in the Kubernetes cluster fails
  • Deploy and manage both Linux and Windows-based containerized applications
  • Schedule workloads
  • Monitor health
  • Scale up or down by adding or removing nodes to the Kubernetes cluster
  • Manage networking
  • Discover services
  • Coordinate application upgrades
  • Assign pods to cluster nodes with cluster node affinity

There are several important features that AKS brings to Azure Stack HCI that simplify the process of setting up Kubernetes, including:

  • A Windows Admin Center (WAC) wizard for setting up Kubernetes and its dependencies: kubeadm, kubelet, and a Pod network add-on
  • A WAC wizard for creating Kubernetes clusters to run your containerized applications
  • PowerShell cmdlets for setting up Kubernetes and creating Kubernetes clusters to allow you to script the host setup and Kubernetes cluster creation

Let’s get started with deploying an AKS instance to our Azure Stack HCI cluster…

To start, I’m going to do this deployment using PowerShell, but you can use Windows Admin Center as well. Additionally, I’m deploying AKS to a 2-node Azure Stack HCI cluster that is itself nested in Hyper-V, but it can be deployed to any of the following:

  • 2-4 node Azure Stack HCI Cluster
  • Windows Server 2019 Datacenter failover cluster
  • Single node Windows Server 2019 Datacenter

First, we will need to download and install the AksHci PowerShell module. This can be downloaded directly from Microsoft’s Azure Kubernetes Service on Azure Stack HCI registration page. The downloaded package contains AksHci.Powershell.zip, which holds the PowerShell module. You will need to extract this zip file to %systemdrive%\program files\windowspowershell\modules on each of the Azure Stack HCI nodes.
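Here’s a quick sketch of that extraction step, assuming the package was copied to C:\Temp on each node (adjust the source path to wherever you downloaded it):

PowerShell
# Extract the AksHci module package into the system-wide PowerShell modules folder
Expand-Archive -Path C:\Temp\AksHci.Powershell.zip `
    -DestinationPath "$env:SystemDrive\Program Files\WindowsPowerShell\Modules" -Force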

Here is what the PowerShell Modules folder should look like on each of the nodes once extracted, with the AksHci, Kva, Moc, and MK8SDownloadAgent modules added.

Next, we need to import the new AksHci module:
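PowerShell
# Load the AksHci module into the current session
Import-Module AksHci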

Upon import completion, close all open PowerShell windows and then prepare the nodes for deployment. We do this by running checks on every node to see whether all the requirements to install AKS on Azure Stack HCI are satisfied. From an Administrator PowerShell prompt, run the following command: Initialize-AksHciNode

You should see “Done” displayed in green text, as you can see above. This means we are ready to configure the deployment using Set-AksHciConfig

This command has the following options, but only the imageDir and cloudConfigLocation parameters are required for multi-node deployments.


Although Microsoft states that only imageDir and cloudConfigLocation are required, in my deployment I also had to specify the vnetName parameter. Note that on a multi-node deployment, the paths for both imageDir and cloudConfigLocation must point to shared storage, such as C:\ClusterStorage\Volume01\Images.

The location needs to be on a highly available share so that the storage is always accessible. It can also live on an SMB share, such as \\Fileshare\Images, if preferred. In my deployment, I’m pointing to my HCI cluster volume, as shown in the example below.
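This is roughly what my configuration call looked like. The paths reflect my cluster volume; the vnetName value (“External”) is a stand-in for my virtual switch name, so treat it as an assumption:

PowerShell
# Point the image and cloud-agent config locations at shared cluster storage;
# vnetName was required in my deployment even though the docs mark it optional
Set-AksHciConfig -imageDir C:\ClusterStorage\Volume01\Images `
    -cloudConfigLocation C:\ClusterStorage\Volume01\Config `
    -vnetName "External"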

Once we have all the necessary configuration completed and validated, it’s time to kick off our new AKS deployment. To do this, we install the AKS on Azure Stack HCI agents/services and the AKS Service Host by running the following command: Install-AksHci

We have now successfully completed our installation of the AKS Service Host, so let’s verify that it is fully deployed by running the command: Get-AksHciCluster

As you can see, we have successfully provisioned the host for my clustergroup-management…

In order to access your clusters using kubectl, we will need to get the kubeconfig by running the command: Get-AksHciCredential

It requires two parameters:

  • clusterName – the name of the cluster, in this case “clustergroup-management”
  • outputLocation – the path where you want the kubeconfig saved; the default is %USERPROFILE%\.kube
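For example, a sketch using the parameter names above and the default output location:

PowerShell
# Retrieve the kubeconfig for the management cluster
Get-AksHciCredential -clusterName clustergroup-management `
    -outputLocation "$env:USERPROFILE\.kube"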

We will use the kubeconfig file later in this blog so keep it handy.

Now that we have AKS successfully deployed to our Azure Stack HCI cluster, we need to create our first Kubernetes cluster. To do this, we will use the New-AksHciCluster command. There are several parameters that can be used, which I’ve provided below; however, only -clusterName is required. If you pass only the required parameter, the number of control plane nodes and Linux nodes will default to 1, while Windows nodes will default to 0.

My first cluster deployment uses the default values, but I did specify the parameters explicitly, as in the sketch below. I executed this command directly from my Windows 10 management machine, but you can also deploy directly from the Azure Stack HCI hosts.
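Here is roughly what that invocation looks like; the cluster name is a placeholder of mine, and the node-count parameter names are from the preview module I was using:

PowerShell
# Create a workload cluster; these counts mirror the defaults called out above
New-AksHciCluster -clusterName demo-cluster `
    -controlPlaneNodeCount 1 `
    -linuxNodeCount 1 `
    -windowsNodeCount 0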


Let’s get a list of our deployed AKS host and Kubernetes clusters by running the following command: Get-AksHciCluster

If we want to scale a Kubernetes cluster, we can easily do this by running the command: Set-AksHciClusterNodeCount
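Here’s a sketch of scaling the Linux worker pool out to three nodes (again, the cluster name is a placeholder and the parameter names are from the preview module I used):

PowerShell
# Scale the workload cluster's Linux node pool from 1 to 3 nodes
Set-AksHciClusterNodeCount -clusterName demo-cluster `
    -linuxNodeCount 3 `
    -windowsNodeCount 0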


We now have an AKS cluster fully deployed and operational with 1 Control Plane and 3 Linux worker nodes.

After AKS has been fully deployed, you can see your new Kubernetes cluster in the Azure Stack HCI Cluster Manager view within Windows Admin Center. Click on Azure Kubernetes Service in the lower left corner of the Tools pane, and this will present your new Kubernetes cluster, as seen below.

That’s it! You have successfully deployed Azure Kubernetes Service on Azure Stack HCI.

But wait…there’s more…Let’s walk through connecting your clusters to Azure Arc for Kubernetes.

When an AKS on Azure Stack HCI cluster is attached to Arc, it will appear in the Azure portal. Before we start, you will need to ensure that you have the following requirements ready:

  • An AKS on Azure Stack HCI cluster with at least 1 Linux worker node that is up and running – we should have that; we just completed it.
  • Your kubeconfig file that we generated earlier – this gives us access to the cluster, along with the cluster-admin role on the cluster, for deployment of the Arc enabled Kubernetes agents.
  • The AKS on Azure Stack HCI PowerShell module installed – again, we have already done this, so you should be ready to go.
  • Azure CLI version 2.3+, which is required to install the Azure Arc enabled Kubernetes CLI extensions. Install the Azure CLI if you haven’t already, or update it to the latest version.
  • An Azure subscription on which you are an owner or at least a contributor.
  • Run the commands below in an Administrator PowerShell window.

Log in to Azure with the Azure CLI:
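PowerShell
az login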

Register the two providers, below, for Azure Arc enabled Kubernetes:
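PowerShell
# Register the resource providers used by Azure Arc enabled Kubernetes
az provider register --namespace Microsoft.Kubernetes
az provider register --namespace Microsoft.KubernetesConfiguration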

You can check the registration status by running the following commands:
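PowerShell
# Registration can take a few minutes; repeat until both show "Registered"
az provider show -n Microsoft.Kubernetes -o table
az provider show -n Microsoft.KubernetesConfiguration -o table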

Next, create a resource group in Azure Public to hold the connected cluster resource. You can also use an existing resource group in East US or West Europe. If you want to create a new one, use the command below:
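The resource group name here is a placeholder of mine:

PowerShell
# Create a resource group in one of the supported regions (East US or West Europe)
az group create --name AksHci-Arc-rg --location eastus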

Create a new service principal. It’s best practice to create one with an informative name, but you can also use an existing one if you like. I’ve created a new one below. If you are using an existing service principal account, you will need its appId, password, and tenant values.
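A sketch of creating the SPN; the display name is a placeholder:

PowerShell
# Create an SPN for Arc onboarding; note the appId, password, and tenant in the output
az ad sp create-for-rbac --name "akshci-arc-onboarding" --role Contributor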

Using the values generated during the service principal creation, we kick off the Azure Arc onboarding with the command: Install-AksHciArcOnboarding
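Here’s a sketch of the onboarding call; every value is a placeholder drawn from the steps above, and the parameter names are from the preview module I used:

PowerShell
# clientId/clientSecret/tenantId come from the SPN output; subscriptionId is your Azure subscription
Install-AksHciArcOnboarding -clusterName demo-cluster `
    -resourceGroup AksHci-Arc-rg `
    -location eastus `
    -subscriptionId xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx `
    -clientId xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx `
    -clientSecret "xxxxxxxxxxxx" `
    -tenantId xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx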


Once the above command completes successfully, the onboarding will continue. You can run the kubectl command shown in the output, or you can go to the Azure portal to see whether it has completed. Once the onboarding process is done, you will see the following in the Azure Arc for Kubernetes blade.

After Azure Arc has completed its onboarding, you can go back into the Windows Admin Center Cluster Manager; under the Azure Kubernetes Service tools pane, you will now see that Azure Arc is tied to this cluster instance.

So, to recap, we have walked through deploying an Azure Kubernetes Service host on an Azure Stack HCI cluster. From there we created our first Kubernetes cluster and finally we tied it to Azure Arc in Azure Public.

I hope you found this information helpful. This is a preview, and updates are constantly being made to the deployment flow, so some steps may evolve over time.

In the next blog, we will discuss deploying Windows and Linux applications to your Kubernetes cluster – Stay tuned!