You'll install IoT Edge workloads on Kubernetes. For expediency, the cluster environment is hosted in a single Azure VM, with three Docker containers emulating three Kubernetes nodes using the k3d tool.
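The lab VM sets this cluster up for you, but for reference, creating a similar multi-node cluster with k3d might look like the following sketch (syntax shown is for k3d v3+; earlier releases used `k3d create --workers`, and the cluster name `edge-lab` is illustrative):

```shell
# Illustrative only -- the lab's VM template performs this setup for you.
# Creates a cluster whose server and agent nodes each run as a Docker container.
k3d cluster create edge-lab --agents 3

# Point kubectl at the new cluster and list the emulated nodes
kubectl config use-context k3d-edge-lab
kubectl get nodes
```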
You'll need an active Azure subscription and Azure Cloud Shell for this lab.
Perform the following steps in the Azure Cloud Shell environment.
# Note: on newer Azure CLI versions, this extension has been replaced by 'azure-iot'
az extension add --name azure-cli-iot-ext
Perform these steps in the cloud shell environment.
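The commands below reference the shell variables $UNIQUESTRING and $RGLOC and assume the resource group iotlab-k8s-resources-$UNIQUESTRING already exists. If you're starting fresh, a minimal setup might look like this (the region value is illustrative; pick any region you like):

```shell
# Illustrative setup -- a short random suffix keeps Azure resource
# names globally unique; adjust the region to taste.
export UNIQUESTRING=$(head -c 256 /dev/urandom | tr -dc 'a-z0-9' | head -c 6)
export RGLOC=westus2

# Create the resource group the rest of the lab deploys into
az group create \
  --name iotlab-k8s-resources-$UNIQUESTRING \
  --location $RGLOC
```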
az iot hub create \
--resource-group iotlab-k8s-resources-$UNIQUESTRING \
--name iotlab-k8s-hub-$UNIQUESTRING \
--sku S1 \
--partition-count 2
az iot hub device-identity create \
--hub-name iotlab-k8s-hub-$UNIQUESTRING \
--device-id edge-k8s-device-$UNIQUESTRING \
--edge-enabled
Configure a sample set of inter-communicating modules as the workload to run on the device:
# Download
wget https://aka.ms/iotsummit/sampleWorkload \
-O workload.json -q
# Set
az iot edge set-modules \
--hub-name iotlab-k8s-hub-$UNIQUESTRING \
--device-id edge-k8s-device-$UNIQUESTRING \
--content workload.json
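To confirm the modules were applied, you can query the module identities IoT Hub now tracks for the device (the module names returned will depend on the sample workload):

```shell
# Lists the module identities registered on the edge device
az iot hub module-identity list \
  --hub-name iotlab-k8s-hub-$UNIQUESTRING \
  --device-id edge-k8s-device-$UNIQUESTRING \
  --query "[].moduleId" -o tsv
```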
Perform these steps in the cloud shell environment.
ssh-keygen -m PEM -t rsa -b 4096 -q -f ~/.ssh/id_k8s_lab -N ""
# Set environment variable
export CONNSTR=$(az iot hub device-identity show-connection-string \
--device-id edge-k8s-device-$UNIQUESTRING \
--hub-name iotlab-k8s-hub-$UNIQUESTRING \
-o tsv)
# Deploy Kubernetes in a VM
# Note: on newer Azure CLI versions, 'az group deployment create'
# is replaced by 'az deployment group create'
az group deployment create \
--name edgeVm \
--resource-group iotlab-k8s-resources-$UNIQUESTRING \
--template-uri "https://aka.ms/iotsummit/labK8sVmDeploy" \
--parameters location=$RGLOC \
--parameters dnsLabelPrefix=iotedge-k8s-vm-$UNIQUESTRING \
--parameters adminUsername='azureuser' \
--parameters deviceConnectionString=$CONNSTR \
--parameters authenticationType='sshPublicKey' \
--parameters adminPasswordOrKey="$(< ~/.ssh/id_k8s_lab.pub)" | \
jq .properties.outputs
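The jq filter above prints all of the template's outputs. If the template exposes the SSH connection string as a named output, you could extract just that value; the output name sshCommand below is an assumption, so substitute whatever name appears in your actual outputs:

```shell
# Hypothetical: extract a single output value from the completed deployment.
# Replace 'sshCommand' with the name your template actually uses.
az group deployment show \
  --name edgeVm \
  --resource-group iotlab-k8s-resources-$UNIQUESTRING \
  --query "properties.outputs.sshCommand.value" -o tsv
```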
SSH into the VM using the fully qualified domain name reported in the deployment outputs; with the DNS label set above, it will look like:
ssh -i ~/.ssh/id_k8s_lab azureuser@iotedge-k8s-vm-$UNIQUESTRING.$RGLOC.cloudapp.azure.com
Run
watch kubectl get nodes
in the VM's bash shell and wait for it to report 3 nodes. It is expected that you'll see "not found" initially. Ctrl+c
returns you to the VM's shell.
When deployed on Kubernetes, the IoT Edge runtime automagically translates the IoT Edge application deployment into Kubernetes primitives. If you're already familiar with IoT Edge, there are few new concepts to learn.
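You can observe this translation with plain kubectl from the VM's shell. Assuming the workload landed in the helloworld namespace (as in this lab), each module shows up as familiar Kubernetes objects:

```shell
# Run inside the VM. IoT Edge modules appear as Kubernetes
# deployments, pods, and (for modules exposing ports) services.
kubectl get deployments,pods,services -n helloworld
```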
Better yet, modules you developed for a single device work without any changes on Kubernetes. For instance, the workload you set to run on Kubernetes is the very same one you used in the Deploy an IoT Edge VM lab. Device workloads are installed in their own isolated namespace; here, the workload was installed in a namespace called helloworld. Check that it's running:
#
# Run this in the VM emulating the Kubernetes
# environment. 'k9s' is a tool to visually interact
# with the cluster.
#
k9s -n helloworld
In the k9s explorer, hit SHIFT+:
and enter po
to view the pods that host workloads in Kubernetes. Each IoT Edge module runs in a Kubernetes pod. Notice them coming up just like on a single device; however, on Kubernetes they come up on different nodes. The Kubernetes scheduler actively moves workloads from unhealthy nodes to healthy ones, thereby improving deployment resiliency.
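If you prefer the command line over a visual explorer, the same pod-to-node spread can be seen with kubectl (run in the VM):

```shell
# The NODE column shows each module's pod scheduled onto a different node
kubectl get pods -n helloworld -o wide
```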
Selecting a pod and hitting l
will show its logs (along with those of its sidecar proxy). Hitting Ctrl+k
with a workload pod selected will kill it (don't pick the iotedged pod, since it isn't set up to persist state in this example 😁); notice the pod coming up again as Kubernetes drives the system back to the desired state.
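The same self-healing behavior can be demonstrated without k9s. The pod name below is a placeholder, so substitute one from your own namespace:

```shell
# Delete a workload pod by name (placeholder shown), then watch the
# controller recreate it to restore the desired state.
kubectl delete pod <workload-pod-name> -n helloworld
kubectl get pods -n helloworld -w
```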
Hit SHIFT+:
and enter svc
to view the services that expose workloads (modules) to others in the namespace. If you want to go deeper down the rabbit hole, Ctrl+a
lists all supported aliases.
Ctrl+c
exits k9s.
You're done! In this lab you learned about running IoT Edge workloads on Kubernetes for improved resilience and worked with a couple of tools from the Kubernetes ecosystem.
Remember, https://aka.ms/edgek8sdoc has a bunch more information, advanced tutorials, and pretty architecture diagrams 😀