In Part 1 of my series on deploying Kubernetes clusters to VMware Cloud on AWS environments with ClusterAPI Provider vSphere, I detailed the process required to stand up the CAPV management plane. After completing those steps, I am ready to provision a workload cluster to VMC using CAPV.
Creating Workload Clusters
The CAPV management cluster is the brains of the operation, but I still need to deploy some workload clusters for my teams of developers to deploy their applications onto. The management cluster automates the provisioning of all of the provider components that support my workload clusters, as well as instantiating the provisioned VMs as Kubernetes clusters. The basic use case here is that I, as the infrastructure admin, am responsible for using the CAPV management cluster to provision multiple workload clusters that can support individual teams of developers, individual application deployments, and so on. The CAPV management cluster allows me to easily deploy a consistent cluster in a repeatable fashion with very little manual effort. I can quickly deploy a test/dev/prod set of clusters for a team, or deploy 5 different workload clusters for 5 different groups of developers.
Another usage pattern, and probably one more aligned with the DevOps mentality, is to configure authentication to the management cluster and use Kubernetes RBAC constructs to give teams the ability to create workload clusters in their respective namespaces. This way, developers have full control over when and what they provision, as long as it fits within the limits established for them by the infrastructure team. True self-service!!
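As a rough sketch of that pattern, a namespaced Role and RoleBinding on the management cluster might look like the following. The dev-team-a namespace and group are hypothetical, and the rule list should be tightened to match your own policies and CAPI/CAPV versions:
# Hypothetical sketch: let the "dev-team-a" group manage Cluster API
# resources only inside the "dev-team-a" namespace on the management cluster.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: capi-self-service
  namespace: dev-team-a
rules:
- apiGroups:
  - cluster.x-k8s.io
  - bootstrap.cluster.x-k8s.io
  - infrastructure.cluster.x-k8s.io
  resources: ["*"]
  verbs: ["create", "get", "list", "watch", "update", "delete"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: capi-self-service
  namespace: dev-team-a
subjects:
- kind: Group
  name: dev-team-a                 # group name comes from your auth integration
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: capi-self-service
  apiGroup: rbac.authorization.k8s.io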
In this exercise, I’m going to deploy a workload cluster named “prod-cluster”, composed of 1 master node and 4 worker nodes. I’ll start by using the same docker “manifests” image I used in Part 1 of the series, along with the same envvars.txt file, to create the .yaml file scaffolding for my workload cluster. Note that I’m using a different cluster name (prod-cluster), so all of my config files will be stored in a new directory:
# docker run --rm \
-v "$(pwd)":/out \
-v "$(pwd)/envvars.txt":/envvars.txt:ro \
gcr.io/cluster-api-provider-vsphere/release/manifests:v0.5.4 \
-c prod-cluster
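A quick listing of the new directory shows the scaffolding the image produced (the exact file set can vary slightly between manifests image versions):
# ls ./out/prod-cluster/
addons.yaml  cluster.yaml  controlplane.yaml  machinedeployment.yaml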
Just as in the management cluster example, the “manifests” docker image creates the .yaml files I’ll need to create my workload cluster. The first thing I’ll do is create the Cluster resource using the cluster.yaml file:
# kubectl apply -f ./out/prod-cluster/cluster.yaml
cluster.cluster.x-k8s.io/prod-cluster created
vspherecluster.infrastructure.cluster.x-k8s.io/prod-cluster created
In ClusterAPI terms, the Cluster resource defines cluster-wide configuration, such as the pod and service CIDR ranges and the DNS domain used by the cluster.
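For reference, here is a heavily trimmed sketch of what the generated cluster.yaml defines. The API version shown assumes the v1alpha2 CRDs used by this CAPV release, and the CIDR and domain values are illustrative; yours will depend on the manifests image and your envvars.txt:
apiVersion: cluster.x-k8s.io/v1alpha2
kind: Cluster
metadata:
  name: prod-cluster
spec:
  clusterNetwork:
    pods:
      cidrBlocks: ["192.168.0.0/16"]   # pod CIDR (illustrative)
    services:
      cidrBlocks: ["10.96.0.0/12"]     # service CIDR (illustrative)
    serviceDomain: cluster.local       # cluster DNS domain
  infrastructureRef:                   # links to the VSphereCluster object
    apiVersion: infrastructure.cluster.x-k8s.io/v1alpha2
    kind: VSphereCluster
    name: prod-cluster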
Next, I am ready to create the control plane node with the controlplane.yaml file. This file defines the configuration of a vSphere virtual machine as well as a kubeadm bootstrap config that instantiates the VM as a Kubernetes master node:
# kubectl apply -f ./out/prod-cluster/controlplane.yaml
kubeadmconfig.bootstrap.cluster.x-k8s.io/prod-cluster-controlplane-0 created
machine.cluster.x-k8s.io/prod-cluster-controlplane-0 created
vspheremachine.infrastructure.cluster.x-k8s.io/prod-cluster-controlplane-0 created
Note the output of the kubectl command, which informs me that a kubeadm bootstrap config and a machine (virtual machine) were created. If I navigate to the VMC console, I can observe my control plane VM being created from the CentOS template defined in the envvars.txt file.
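To give a sense of what controlplane.yaml wires together, here is a heavily trimmed sketch of the three objects it contains. The kinds and names come straight from the kubectl output above; the individual spec fields are abbreviated and illustrative, and will vary with your CAPV version and envvars.txt values:
apiVersion: bootstrap.cluster.x-k8s.io/v1alpha2
kind: KubeadmConfig                       # kubeadm init/cluster settings (trimmed)
metadata:
  name: prod-cluster-controlplane-0
---
apiVersion: infrastructure.cluster.x-k8s.io/v1alpha2
kind: VSphereMachine                      # describes the vSphere VM to clone
metadata:
  name: prod-cluster-controlplane-0
spec:
  template: centos-7-kube-v1.16.3         # VM template name (illustrative)
  numCPUs: 2                              # sizing values are illustrative
  memoryMiB: 4096
  diskGiB: 40
---
apiVersion: cluster.x-k8s.io/v1alpha2
kind: Machine                             # ties bootstrap + infrastructure together
metadata:
  name: prod-cluster-controlplane-0
  labels:
    cluster.x-k8s.io/cluster-name: prod-cluster
spec:
  version: v1.16.3
  bootstrap:
    configRef:
      apiVersion: bootstrap.cluster.x-k8s.io/v1alpha2
      kind: KubeadmConfig
      name: prod-cluster-controlplane-0
  infrastructureRef:
    apiVersion: infrastructure.cluster.x-k8s.io/v1alpha2
    kind: VSphereMachine
    name: prod-cluster-controlplane-0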
Now I’m ready to create my worker nodes for the cluster, which are defined by the machinedeployment.yaml file. In ClusterAPI terms, a MachineDeployment is analogous to a Deployment in the Kubernetes world: MachineDeployments manage the desired state of a group of Machines (VMs) just as Deployments manage the desired state of pods.
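To make the analogy concrete, here is a trimmed sketch of the generated MachineDeployment: a replica count and selector, plus a Machine template that points at a KubeadmConfigTemplate and a VSphereMachineTemplate instead of a pod spec (field names follow the v1alpha2 API; values are illustrative):
apiVersion: cluster.x-k8s.io/v1alpha2
kind: MachineDeployment
metadata:
  name: prod-cluster-md-0
spec:
  replicas: 1                             # generated value (illustrative); I bump this to 4 below
  selector:
    matchLabels:
      cluster.x-k8s.io/cluster-name: prod-cluster
  template:
    metadata:
      labels:
        cluster.x-k8s.io/cluster-name: prod-cluster
    spec:
      version: v1.16.3
      bootstrap:
        configRef:
          apiVersion: bootstrap.cluster.x-k8s.io/v1alpha2
          kind: KubeadmConfigTemplate
          name: prod-cluster-md-0
      infrastructureRef:
        apiVersion: infrastructure.cluster.x-k8s.io/v1alpha2
        kind: VSphereMachineTemplate
        name: prod-cluster-md-0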
Before I deploy my worker nodes, I need to up the replica count to 4 in the .yaml file. This will ensure my MachineDeployment contains 4 worker nodes:
# vi ./out/prod-cluster/machinedeployment.yaml
...
spec:
  replicas: 4
  selector:
    matchLabels:
      cluster.x-k8s.io/cluster-name: prod-cluster
...
Now I’m ready to create my worker nodes!
# kubectl apply -f ./out/prod-cluster/machinedeployment.yaml
kubeadmconfigtemplate.bootstrap.cluster.x-k8s.io/prod-cluster-md-0 created
machinedeployment.cluster.x-k8s.io/prod-cluster-md-0 created
vspheremachinetemplate.infrastructure.cluster.x-k8s.io/prod-cluster-md-0 created
Again, note the output of the kubectl command, which confirms that a bootstrap config has been created to instantiate these VMs as worker nodes and “join” them under the control of the existing control plane node to form a cluster. We can also confirm the Machines (VMs) have been created in the VMC console.
At this point, I have a workload cluster consisting of 1 master and 4 worker nodes deployed in my VMC environment.
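If I need to resize the cluster later, I don’t have to touch the VMs directly; I can adjust the MachineDeployment’s replica count on the management cluster and let CAPV reconcile the change. A minimal sketch, run against the management cluster (the target of 6 replicas is just an example):
# kubectl patch machinedeployment prod-cluster-md-0 --type merge -p '{"spec":{"replicas":6}}'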
Accessing the Workload Cluster
Now I’ll need to obtain the kubeconfig file that will allow me to interact with the workload cluster. The kubeconfig used to access a workload cluster is stored as a Kubernetes Secret on the management cluster, named <cluster-name>-kubeconfig. I can confirm my prod-cluster kubeconfig is available with the following command:
kubectl get secrets
NAME                      TYPE     DATA   AGE
...
prod-cluster-kubeconfig   Opaque   1      20m
...
I can also use the following command to decode the secret and place the plain-text kubeconfig at ./out/prod-cluster/kubeconfig for later use:
kubectl get secret prod-cluster-kubeconfig -o=jsonpath='{.data.value}' | \
{ base64 -d 2>/dev/null || base64 -D; } >./out/prod-cluster/kubeconfig
In order to start interacting with the prod-cluster workload cluster, I’ll point my KUBECONFIG environment variable at the new kubeconfig I just pulled out of the management cluster:
# export KUBECONFIG="$(pwd)/out/prod-cluster/kubeconfig"
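Alternatively, if I’d rather not change my shell environment, I can pass the kubeconfig to individual commands with kubectl’s --kubeconfig flag, for example:
# kubectl --kubeconfig ./out/prod-cluster/kubeconfig get nodes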
Now I’ll use kubectl to examine the nodes of my workload cluster:
# kubectl get nodes
NAME                                 STATUS     ROLES    AGE   VERSION
prod-cluster-controlplane-0          NotReady   master   25m   v1.16.3
prod-cluster-md-0-55f55ffdb9-4467d   NotReady   <none>   16m   v1.16.3
prod-cluster-md-0-55f55ffdb9-b9hv8   NotReady   <none>   16m   v1.16.3
prod-cluster-md-0-55f55ffdb9-x9v7z   NotReady   <none>   16m   v1.16.3
prod-cluster-md-0-55f55ffdb9-xljxw   NotReady   <none>   16m   v1.16.3
Notice that I am now receiving information about my workload cluster (1 master/4 workers) instead of my management cluster (1 master). Great!
But I’m not done yet… Notice that the nodes are all in the NotReady state. This is because workload clusters do not have any add-ons applied aside from those added by kubeadm. Nodes in the workload cluster will remain in the NotReady state until I apply a Container Network Interface (CNI) add-on.
The “manifests” docker image automatically creates an addons.yaml file that contains the configuration to instantiate Calico as the CNI for the workload cluster, but you can use any CNI you wish. For the sake of simplicity, I’m going to utilize the default Calico config provided:
kubectl apply -f ./out/prod-cluster/addons.yaml
configmap/calico-config created
...
clusterrole.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrolebinding.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrole.rbac.authorization.k8s.io/calico-node created
clusterrolebinding.rbac.authorization.k8s.io/calico-node created
daemonset.apps/calico-node created
serviceaccount/calico-node created
deployment.apps/calico-kube-controllers created
serviceaccount/calico-kube-controllers created
Note that the Calico resources defined in the addons.yaml file include some RBAC config as well as a Deployment and DaemonSet for the Calico components running on the cluster, among other things.
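Before checking the nodes, I can also watch the Calico pods themselves come up in the kube-system namespace (the k8s-app labels below assume the stock Calico manifest shipped in addons.yaml):
# kubectl -n kube-system get pods -l k8s-app=calico-node
# kubectl -n kube-system get pods -l k8s-app=calico-kube-controllers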
Now, I can verify my nodes have all transitioned into a Ready state:
# kubectl get nodes
NAME                                 STATUS   ROLES    AGE   VERSION
prod-cluster-controlplane-0          Ready    master   39m   v1.16.3
prod-cluster-md-0-55f55ffdb9-4467d   Ready    <none>   29m   v1.16.3
prod-cluster-md-0-55f55ffdb9-b9hv8   Ready    <none>   29m   v1.16.3
prod-cluster-md-0-55f55ffdb9-x9v7z   Ready    <none>   29m   v1.16.3
prod-cluster-md-0-55f55ffdb9-xljxw   Ready    <none>   29m   v1.16.3
VOILA!! Now I’m ready to start deploying my workloads to the cluster!!
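For example, a quick smoke test might look like the following (nginx is just a stand-in workload):
# kubectl create deployment nginx --image=nginx
# kubectl get pods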
Conclusion
This concludes Part 2 of my series on deploying Kubernetes clusters to a VMware Cloud on AWS environment at scale with ClusterAPI Provider vSphere! CAPV is a very powerful tool that helps infrastructure admins provide their users with a consistent, scalable workflow for deploying and managing Kubernetes clusters on top of vSphere infrastructure.
Stay tuned to the blog for a more in-depth look into how Cluster API is utilized in Project Pacific to provide self-service Kubernetes cluster management natively within vSphere!