Hopefully you have heard this week’s announcements for Microsoft’s latest operating system offering: Azure Stack HCI OS. It is a specialized virtualization operating system, deployed on an integrated system in a hyperconverged model and delivered as an Azure hybrid service. Delivery as an Azure hybrid service means the OS receives the latest security updates, performance improvements, and features that you are already used to when consuming Azure services. With Azure Stack HCI, you can deploy and run Windows and Linux virtual machines in your datacenter or at the edge, using your existing tools, processes, and skillsets, while simultaneously extending your datacenter to the cloud through Azure capabilities such as Azure Backup, Azure Monitor, Azure Update Management, and Azure Security Center.
Today, we are going to talk about bringing the Azure Kubernetes Service (AKS) directly into Azure Stack HCI. This brings an on-premises implementation of AKS, which automates the management and orchestration of containerized applications at scale. Before we dig into what it looks like to deploy AKS on Azure Stack HCI, let’s first talk about why you would use Kubernetes and how it’s transforming the cloud and datacenter landscape. Of course, you can manage a few containers manually using Docker or similar tools, but once you build beyond a lab and applications make use of tens, hundreds, or thousands of containers, at-scale management and orchestration become essential. That is where Kubernetes shines.
Kubernetes, in essence, is an open-source orchestrator for automating container management at scale. AKS takes this tool and simplifies the management, deployment, and governance of it both in Azure Public Cloud and on-premises. Within Azure Stack HCI, AKS significantly simplifies and streamlines the deployment of a Kubernetes infrastructure by providing wizards for setting up the infrastructure and integrating it into Azure Stack HCI.
Some of the functionality offered by AKS during preview and initial launch includes:
- Deploy containerized applications at scale to a cluster of virtual machines, known as a Kubernetes cluster, running across the Azure Stack HCI cluster
- Automatic failover when a node in the Kubernetes cluster fails
- Deploy and manage both Linux and Windows-based containerized applications
- Schedule workloads
- Monitor health
- Scale up or down by adding or removing nodes to the Kubernetes cluster
- Manage networking
- Discover services
- Coordinate application upgrades
- Assign pods to cluster nodes with cluster node affinity
There are several important features that AKS brings to Azure Stack HCI that simplify the process of setting up Kubernetes, including:
- A Windows Admin Center (WAC) wizard for setting up Kubernetes and its dependencies: kubeadm, kubelet, and a Pod network add-on
- A WAC wizard for creating Kubernetes clusters to run your containerized applications
- PowerShell cmdlets for setting up Kubernetes and creating Kubernetes clusters, allowing you to script the host setup and Kubernetes cluster creation
Let’s get started with deploying an AKS instance to our Azure Stack HCI cluster…
To start, I’m going to do this deployment using PowerShell, but you can use Windows Admin Center as well. Additionally, I’m deploying AKS to a two-node Azure Stack HCI cluster running in Hyper-V, but AKS can be deployed to any of the following:
- 2-4 node Azure Stack HCI Cluster
- Windows Server 2019 Datacenter failover cluster
- Single node Windows Server 2019 Datacenter
First, we will need to download and install the AksHci PowerShell module. This can be downloaded directly from Microsoft’s Azure Kubernetes Service on Azure Stack HCI registration page. The downloaded package contains AksHci.Powershell.zip, which holds the PowerShell module. You will need to extract this zip file to %systemdrive%\program files\windowspowershell\modules on each of the Azure Stack HCI nodes.
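As a sketch, the extraction step might look like the following on each node (the source path for the zip is illustrative; adjust it to wherever you copied the download):

```powershell
# Run on each Azure Stack HCI node. Assumes AksHci.Powershell.zip has
# already been copied to the node (the C:\Temp path is illustrative).
$destination = "$env:SystemDrive\Program Files\WindowsPowerShell\Modules"
Expand-Archive -Path 'C:\Temp\AksHci.Powershell.zip' -DestinationPath $destination -Force

# Confirm the expected module folders are now present
Get-ChildItem -Path $destination -Directory |
    Where-Object Name -in 'AksHci', 'Kva', 'Moc', 'MK8SDownloadAgent'
```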
Here is what the PowerShell Modules folder should look like on each of the nodes once extracted, with the AksHci, Kva, Moc, and MK8SDownloadAgent modules added.
Next, we need to import the new AksHci module:
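The import itself is a one-liner, and a quick check with Get-Command confirms the cmdlets are available:

```powershell
Import-Module AksHci
Get-Command -Module AksHci   # lists the AksHci cmdlets now available
```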
Upon import completion, close all open PowerShell windows and then prepare the nodes for deployment. We do this by running checks on every computer to see whether all the requirements to install AKS on Azure Stack HCI are satisfied. From an Administrator PowerShell prompt, run the following command: Initialize-AksHciNode
You should see “Done” displayed in green text. This means we are ready to configure the deployment using Set-AksHciConfig.
This command has the following options, but only the imageDir and cloudConfigLocation parameters are required on multi-node deployments.
Although Microsoft states that only imageDir and cloudConfigLocation are required, in my deployment I also had to specify the vnetName parameter. Note that on a multi-node deployment, the paths for both imageDir and cloudConfigLocation must point to a shared storage path such as C:\ClusterStorage\Volume01\Images. The location needs to be on highly available storage so that it is always accessible. It can also live on an SMB share, such as \\Fileshare\Images, if preferred. In my deployment, I’m pointing to my HCI cluster volume as shown in the example below.
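A sketch of the configuration call as used in my lab follows; the volume paths and switch name are specific to my environment, so substitute your own (and remember that vnetName was required in my case even though Microsoft lists only the other two):

```powershell
# Paths must live on shared cluster storage for a multi-node deployment
Set-AksHciConfig -imageDir 'C:\ClusterStorage\Volume01\Images' `
                 -cloudConfigLocation 'C:\ClusterStorage\Volume01\Config' `
                 -vnetName 'External'   # name of an existing virtual switch (lab-specific)
```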
Once we have all the necessary configurations completed and validated, now it’s time to kick off our new AKS deployment. To do this, we install the AKS on Azure Stack HCI agents/services and the AKS Service Host by running the following command: Install-AksHci
We have now successfully completed our installation of the AKS service host, so let’s verify that it is fully deployed by running the command: Get-AksHciCluster
As you can see, we have successfully provisioned the host for my clustergroup-management…
In order to access your clusters using kubectl, we will need to get the kubeconfig by running the command: Get-AksHciCredential
It requires two parameters:
- clusterName – the name of the cluster, in this case “clustergroup-management”
- outputLocation – the path where you want the kubeconfig saved. The default is %USERPROFILE%\.kube
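Putting the two parameters together, the call looks like this (the cluster name matches the management cluster above; the output path shown is simply the default):

```powershell
Get-AksHciCredential -clusterName 'clustergroup-management' `
                     -outputLocation "$env:USERPROFILE\.kube"
```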
We will use the kubeconfig file later in this blog so keep it handy.
Now that we have AKS successfully deployed to our Azure Stack HCI cluster, we need to create our first Kubernetes cluster…To do this, we will use the New-AksHciCluster command. There are several parameters that can be used, which I’ve provided below; however, only -clusterName is required. If you pass only the required parameter, the number of control plane nodes and Linux nodes will default to 1, while Windows nodes will default to 0.
My first cluster deployment uses the default values, but I specified the parameters explicitly. I executed this command directly from my Windows 10 management machine, but you can also deploy directly from the Azure Stack HCI hosts.
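As a sketch, the command looked roughly like this; the cluster name is my own choice, and the node counts shown are simply the defaults spelled out explicitly (parameter names are per the preview module and may shift):

```powershell
New-AksHciCluster -clusterName 'demo-cluster' `
                  -controlPlaneNodeCount 1 `
                  -linuxNodeCount 1 `
                  -windowsNodeCount 0
```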
Let’s get a list of our deployed AKS host and Kubernetes clusters by running the following command: Get-AksHciCluster
If we want to scale a Kubernetes cluster, we can easily do this by running the command: Set-AksHciClusterNodeCount
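For example, scaling a cluster’s Linux worker pool to three nodes might look like this (the cluster name is illustrative, and the parameter name is per the preview module):

```powershell
Set-AksHciClusterNodeCount -clusterName 'demo-cluster' -linuxNodeCount 3
```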
We now have an AKS cluster fully deployed and operational with 1 Control Plane and 3 Linux worker nodes.
After AKS has been fully deployed, you can see your new Kubernetes cluster in the Azure Stack HCI Cluster Manager view within Windows Admin Center. Click Azure Kubernetes Service in the lower-left corner of the tools pane, and this will present your new Kubernetes cluster as seen below.
That’s it! You have successfully deployed Azure Kubernetes Service on Azure Stack HCI.
But wait…there’s more…Let’s walk through connecting your clusters to Azure Arc for Kubernetes.
When an AKS on Azure Stack HCI cluster is attached to Arc, it will appear in the Azure portal. Before we start, you will need to ensure that you have the following requirements ready:
- An AKS on Azure Stack HCI cluster with at least one Linux worker node that is up and running – we should have that, as we just completed it.
- Your kubeconfig file that we generated earlier, which gives us access to the cluster, along with the cluster-admin role on the cluster for deployment of the Arc enabled Kubernetes agents.
- The AKS on Azure Stack HCI PowerShell module installed – again, we have already done this, so you should be ready to go.
- Azure CLI version 2.3+, which is required to install the Azure Arc enabled Kubernetes CLI extensions. Install Azure CLI if you haven’t already, or update to the latest version to ensure you meet this requirement.
- An Azure subscription on which you are an owner or at least a contributor
- Run the below commands in a PowerShell Administrative window.
Log in to Azure with the Azure CLI:
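The login step is interactive; setting the target subscription afterwards is optional but avoids surprises (the subscription ID is a placeholder):

```powershell
az login   # opens a browser window for interactive sign-in
az account set --subscription '<your-subscription-id>'   # optional: pick the target subscription
```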
Register the two providers, below, for Azure Arc enabled Kubernetes:
You can check the registration status by running the following commands:
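The two providers in question are Microsoft.Kubernetes and Microsoft.KubernetesConfiguration; registering them and then checking their status looks like this:

```powershell
# Register the providers used by Azure Arc enabled Kubernetes
az provider register --namespace Microsoft.Kubernetes
az provider register --namespace Microsoft.KubernetesConfiguration

# Registration is asynchronous; re-run these until the state reads "Registered"
az provider show -n Microsoft.Kubernetes -o table
az provider show -n Microsoft.KubernetesConfiguration -o table
```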
Next, create a resource group in Azure Public to hold the connected cluster resource. You can also use an existing resource group in East US or West Europe. If you want to create a new resource group, use the command below:
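A sketch of the resource group creation (the group name is illustrative; the region must be one of the two supported during preview):

```powershell
az group create --name 'AksHci-Arc-rg' --location 'eastus' -o table
```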
Create a new service principal. It’s best practice to create one with an informative name, but you can also use an existing one if you like. I’ve created a new one below. If you are using an existing service principal, you will need its appId, password, and tenant values.
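Creating the service principal looks like this (the name is illustrative); note the appId, password, and tenant fields in the output, which the onboarding command needs:

```powershell
az ad sp create-for-rbac --name 'aks-hci-arc-onboarding'
# Output includes appId, password, and tenant - keep these for the next step
```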
Using the values generated during service principal creation, we kick off the Azure Arc onboarding using the command: Install-AksHciArcOnboarding
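Per the preview module, the onboarding cmdlet takes the service principal values along with the target subscription, resource group, and location. A sketch follows; every value is a placeholder, and the parameter names may shift as the preview evolves:

```powershell
# All values below are placeholders - substitute your own
Install-AksHciArcOnboarding -clusterName 'demo-cluster' `
                            -resourceGroup 'AksHci-Arc-rg' `
                            -subscriptionId '<subscription-id>' `
                            -tenantId '<tenant>' `
                            -clientId '<appId>' `
                            -clientSecret '<password>' `
                            -location 'eastus'
```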
Once the above command completes successfully, the onboarding will continue in the background. You can run the kubectl command shown in the output, or you can go to the Azure portal to see whether it has completed. Once the onboarding process finishes, you will see the cluster in the Azure Arc for Kubernetes blade.
After Azure Arc has completed its onboarding, you can go back into Windows Admin Center Cluster Manager and under the Azure Kubernetes Service tools pane, you will now see that Azure Arc is tied to this cluster instance.
So, to recap, we have walked through deploying an Azure Kubernetes Service host on an Azure Stack HCI cluster. From there we created our first Kubernetes cluster and finally we tied it to Azure Arc in Azure Public.
I hope you found this information helpful. This is a preview and updates are constantly being made to the flow of the deployment so some steps may evolve over time.
In the next blog, we will discuss deploying Windows and Linux applications to your Kubernetes cluster – Stay tuned!
Microsoft recently announced the next iteration of the Azure VMware Solution, which aims to make it easier and cheaper to migrate on-premises VMware applications to Microsoft’s Azure cloud. This next-generation version is now available in preview in the US East and West Europe regions, targeting the second half of 2020 for general availability.
The new version of the solution enables customers to extend beyond their datacenter, or to completely migrate on-premises VMware apps to Azure, without the need to re-architect the applications as has traditionally been required. This, in turn, provides cost savings and lowers the effort and risk of getting VMware application workloads to Azure.
Under the covers, the Azure VMware Solution gives customers the ability to use the same VMware foundation, VMware Cloud Foundation (VCF), in Azure as they do today in their private datacenters. This is a first-party Azure service that is built and supported by Microsoft and endorsed by VMware. Since it’s a native implementation of VMware’s VCF, customers can move to Azure without having to learn anything new, and they can take advantage of Microsoft’s Azure Hybrid Benefit option as well.
Before we jump into the use cases this solution targets, let’s talk about what VMware Cloud Foundation (VCF) is and how it helps with orchestration and a customer’s cloud journey, especially in a multi-cloud scenario. VCF is a solution VMware developed that provides an integrated stack bundling compute virtualization (VMware vSphere), storage virtualization (VMware vSAN), network virtualization (VMware NSX), and cloud management and monitoring (VMware vRealize Suite) into a single platform. It can be deployed on-premises, where the customer is responsible for managing the physical infrastructure as well as the virtual machine workloads, or run as an as-a-service offering in the public cloud, where the cloud provider, such as Azure, AWS, or GCP, manages the underlying physical infrastructure and the customer is responsible for managing the virtual machine workloads.
Cloud Foundation introduced a new abstraction known as Workload Domains which consist of one or more vSphere clusters that are provisioned by SDDC Manager. These Workload Domains are resource containers with specific policies configured to handle performance, availability and security.
This framework provides a standard architecture based on VMware Validated Designs with the complete lifecycle automation in mind, providing a clear path to a modernized hybrid environment.
Dell Technologies has the longest-running modernized datacenter strategy, with a broad portfolio of solutions to meet it. One proven solution is VCF on VxRail, which delivers an experience you won’t find on any other infrastructure running VCF today. Built on native VxRail and Cloud Foundation capabilities, with unique integrations jointly engineered by Dell Technologies and VMware, it simplifies, streamlines, and automates the operations of your entire SDDC from Day 0 through Day 2.
Having integrated on-premises solutions is a key component of a modernized infrastructure today, and Dell Technologies provides the necessary components to support a strong hyperconverged datacenter, but that’s only one piece. What about the public cloud? Deploying and managing VMware solutions in the public cloud has been possible for a while, with offerings from Amazon Web Services such as VMware Cloud on AWS, as well as Microsoft’s April 2019 announcement of support for third-party solutions from CloudSimple and Virtustream. Now, however, Microsoft and VMware have formed a strong partnership to deliver a true first-party Azure service that allows enterprise customers to run their VMware technology, including VMware vSphere, HCX, NSX-T, and vSAN, on Azure without the cost, effort, or risk associated with re-architecting applications.
Wow…so you can now run native VMware workloads in Azure, but what are some scenarios that you could use for this?
Let’s look at just some of the possibilities…
- Reduce Datacenter footprint through consolidation and/or retirement
- Expand datacenter operations seamlessly and elastically, on demand and for short periods of time, to handle capacity constraints on-premises
- Disaster recovery / business continuity, using a VMware stack deployed in Azure as a primary or secondary disaster recovery site for your on-premises VMware-based workloads
- Application modernization that lets you tap into the expanding Azure ecosystem to develop more cloud-native applications in a controlled way, without having to abandon or rebuild your VMware-based environments.
As mentioned previously, Azure VMware solution is in preview but you can review pricing for this new service here.
How Does Azure Stack Hub or HCI fit into this picture?
Today, there is no path to run Azure VMware Solution services on an Azure Stack Hub or HCI instance; the service is only available in Azure Public scenarios, where the VMware workloads run in Azure in a colocation model and are directly managed by Microsoft. However, as these applications begin to modernize and take advantage of native cloud architectures, it becomes possible to deploy them to local instances of Azure Stack Hub.