So you spent a week putting together a two-part blog post on how to deploy clusters using Cluster API Provider vSphere. You feel pretty good about yourself, right? Well, guess what: a new version of CAPV is right around the corner, so you'd better update that blog post!! That's why we're here; things move fast in the world of Kubernetes…
With the release of Cluster API v1alpha3, the CAPV team has also released a new build of CAPV (0.6.0) with support for v1alpha3. You can review all of the changes from v1alpha2 to v1alpha3 here, but the main change we'll look at in this blog post is the creation of the management cluster and workload clusters with clusterctl, and how that differs in v1alpha3.
In my previous series of posts on using CAPV to deploy Kubernetes clusters to vSphere environments, I specifically dealt with some of the requirements to support this type of deployment in VMware Cloud on AWS. I won’t be rehashing all of that in this post so feel free to refer to the original posts if you’d like to learn the specifics of deploying clusters to VMC with CAPV.
Prereqs
Feel free to refer to the Install Requirements section of the “Getting Started” guide for more clarity on the requirements for deploying clusters using CAPV. Not much has changed from v1alpha2, but do note that, at the time of writing, DHCP is required to assign IP addresses to Kubernetes nodes as they are provisioned.
Also, please ensure you import both the CAPV machine image OVA, which will be used to support the Kubernetes nodes, as well as the HAProxy OVA, which will allow you to deploy multi-master clusters with CAPV!
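If you prefer to script the imports, a tool like govc can handle them from the command line. Below is a minimal sketch, assuming you have already exported the usual GOVC_URL/GOVC_USERNAME/GOVC_PASSWORD variables and that the local OVA file names (which are illustrative here) correspond to the template names used later in this post:

# Import the CAPV machine image and HAProxy OVAs (file names are illustrative)
$ govc import.ova -name centos-7-kube-v1.17.3-temp ./centos-7-kube-v1.17.3.ova
$ govc import.ova -name capv-haproxy-v0.6.0-rc.2-temp ./capv-haproxy-v0.6.0-rc.2.ova
# Optionally mark the machine image as a template so it can't be powered on by accident
$ govc vm.markastemplate centos-7-kube-v1.17.3-temp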
Installing the Management Cluster
In v1alpha2, we used clusterctl to instantiate a KinD cluster for bootstrapping, install all of the CAPV components within that KinD cluster, use the KinD cluster to provision a management cluster on the target infrastructure, and then “pivot” all of the CAPV management components from the KinD cluster to the management cluster on the target infrastructure. In v1alpha3, we still use clusterctl to deploy the management cluster, but instead of performing a “pivot,” we instantiate the management cluster directly using the clusterctl init command. There is no longer a bootstrap cluster. The clusterctl init command assumes there is some existing Kubernetes cluster available to be utilized as the management cluster. You can either use an existing Kubernetes cluster or revert to using KinD for testing, which is what I will do in this post.
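If you do go the existing-cluster route, keep in mind that clusterctl behaves like kubectl in that it targets whatever cluster your current kubeconfig context resolves to. A minimal sketch, assuming a hypothetical kubeconfig path for that existing cluster:

# Point at the existing cluster you want to turn into the management cluster
# (the path below is illustrative)
$ export KUBECONFIG=~/.kube/existing-cluster
# Confirm the current context resolves to the cluster you expect
$ kubectl config current-context
$ kubectl cluster-info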
As mentioned above, I am going to use a KinD cluster as my CAPV management cluster for this post, so the first thing I need to do after ensuring I’ve installed all of the required software on my client station is to provision my KinD cluster:
$ kind create cluster
This will create a single-node KinD cluster and place the kubeconfig file at ${HOME}/.kube/config if the $KUBECONFIG environment variable is not set.
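As an optional sanity check, you can confirm the KinD cluster is reachable before moving on. KinD names its default cluster “kind,” so the kubeconfig context is typically kind-kind:

# Confirm the KinD cluster is up and reachable
$ kubectl cluster-info --context kind-kind
$ kubectl get nodes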
Another slight change in the v1alpha3 management cluster installation workflow is where the vSphere provider credentials are stored. They are now provided in ~/.cluster-api/clusterctl.yaml.
See an example file below, which utilizes the centos CAPV template:
VSPHERE_USERNAME: "cloudadmin@vmc.local"
VSPHERE_PASSWORD: "SuperSecretPassword"
VSPHERE_SERVER: "10.10.10.10"
VSPHERE_DATACENTER: "SDDC-Datacenter"
VSPHERE_DATASTORE: "WorkloadDatastore"
VSPHERE_NETWORK: "sddc-k8-jomann"
VSPHERE_RESOURCE_POOL: "*/Resources"
VSPHERE_FOLDER: "/SDDC-Datacenter/vm/Workloads/mannimal-k8s"
VSPHERE_TEMPLATE: "centos-7-kube-v1.17.3-temp"
VSPHERE_HAPROXY_TEMPLATE: "capv-haproxy-v0.6.0-rc.2-temp"
VSPHERE_SSH_AUTHORIZED_KEY: "ssh-rsa AAAAB3N..."
Note: Currently, clusterctl does not support optional template variables (VSPHERE_SSH_AUTHORIZED_KEY and VSPHERE_FOLDER are optional), so all variables listed above must be defined to proceed. This will be resolved in a future release of clusterctl.
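As an aside, clusterctl can also pick these values up from environment variables of the same name, which can be handy if you would rather not keep credentials in a file. A minimal sketch, with placeholder values:

# clusterctl will substitute template variables from the environment as well
$ export VSPHERE_USERNAME="cloudadmin@vmc.local"
$ export VSPHERE_PASSWORD="SuperSecretPassword"
$ export VSPHERE_SERVER="10.10.10.10"
# ...and so on for the remaining VSPHERE_* variables shown above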
After creating the required clusterctl.yaml file with access credentials, all you need to do to create your management cluster is run the following command:
$ clusterctl init --infrastructure vsphere
...
Your management cluster has been initialized successfully!
You can now create your first workload cluster by running the following:
clusterctl config cluster [name] --kubernetes-version [version] | kubectl apply -f -
That’s it! Now I have a functioning CAPV management cluster and I’m ready to deploy my first workload cluster!
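If you want to see what clusterctl init actually installed before moving on, the provider controllers land in their own namespaces (the filter below should catch the CAPI and vSphere provider pods, though exact namespace names may vary by release):

# List the controller pods installed by clusterctl init
$ kubectl get pods -A | grep -E 'capi|capv'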
Deploying Workload Clusters
Now that my management cluster is up and running, I’ll need to use clusterctl to help me easily build a .yaml file that defines my workload cluster. I’ll use the command below:
clusterctl config cluster v3-workload \
--infrastructure vsphere \
--kubernetes-version v1.17.3 \
--control-plane-machine-count 3 \
--worker-machine-count 3 > v3-workload.yaml
This command will create a .yaml file that contains the definition for a workload cluster based on Kubernetes version v1.17.3, with 3 master nodes and 3 worker nodes, deployed in the default namespace of the management cluster. Feel free to review (or edit) the created .yaml file to understand what type of resources will be created when you deploy a workload cluster.
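For example, a quick way to spot-check the generated manifest is to look at the resource kinds and replica counts it contains (a minimal sketch; the exact output depends on the template clusterctl renders):

# List the resource kinds defined in the generated manifest
$ grep '^kind:' v3-workload.yaml
# Check the control plane and worker replica counts
$ grep 'replicas:' v3-workload.yaml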
At this point, I’m ready to use kubectl to pass the v3-workload.yaml file to the management cluster to kick off the creation of my workload cluster:
$ kubectl apply -f v3-workload.yaml
cluster.cluster.x-k8s.io/v3-workload created
haproxyloadbalancer.infrastructure.cluster.x-k8s.io/v3-workload created
vspherecluster.infrastructure.cluster.x-k8s.io/v3-workload created
vspheremachinetemplate.infrastructure.cluster.x-k8s.io/v3-workload created
kubeadmcontrolplane.controlplane.cluster.x-k8s.io/v3-workload created
kubeadmconfigtemplate.bootstrap.cluster.x-k8s.io/v3-workload-md-0 created
machinedeployment.cluster.x-k8s.io/v3-workload-md-0 created
As you can see, a handful of resources are created that comprise our workload cluster, including a haproxyloadbalancer, which will be the first resource deployed by the management cluster, as I can observe in the vSphere Web UI:
After the LB VM is created and instantiated as a load balancer, the first control plane/master VM is cloned and created:
Once the first control plane VM is instantiated as the master node and “hooked in” to the HAProxy load balancer, the worker nodes (controlled by the machinedeployment resource) are deployed:
These worker nodes are deployed, instantiated as Kubernetes worker nodes, and registered with the existing master. Once this process is finished, the remaining 2 master nodes are provisioned and registered as master nodes in the cluster. Now I can see my cluster contains 7 total VMs (1 LB, 3 masters, 3 workers), all with the v3-workload prefix:
I can also verify these resources with kubectl when run against the management cluster:
$ kubectl get cluster
NAME PHASE
v3-workload Provisioned
$ kubectl get haproxyloadbalancer
NAME AGE
v3-workload 14m
$ kubectl get machinedeployment
NAME PHASE REPLICAS AVAILABLE READY
v3-workload-md-0 ScalingUp 3
$ kubectl get kubeadmcontrolplane
NAME READY INITIALIZED REPLICAS READY REPLICAS UPDATED REPLICAS UNAVAILABLE REPLICAS
v3-workload true 3 3 3
$ kubectl get machines
NAME PROVIDERID PHASE
v3-workload-5qrkm vsphere://4232e09c-c3e8-d57a-2b8c-0d0a7271387b Running
v3-workload-7cq87 vsphere://4232bf91-c0db-e90d-d855-b76ead0c0d1b Running
v3-workload-dnp2n vsphere://42329d4f-4dd4-f8cc-d042-d187304059dd Running
v3-workload-md-0-8d98957bb-4rbs7 vsphere://42328892-9c43-9053-f5e1-bf499549ab9e Running
v3-workload-md-0-8d98957bb-kh6fq vsphere://4232b7d6-7bad-31ba-9683-f5d5edcb967c Running
v3-workload-md-0-8d98957bb-wzgvv vsphere://423294cc-3b06-40aa-a31b-3996b52f0421 Running
Now that my infrastructure has been created, I need to access the new cluster with a kubeconfig file. The kubeconfig files for workload clusters are stored as secrets on the management cluster. I can run the following command to pull down and decode the kubeconfig file for my v3-workload cluster:
$ kubectl get secret/v3-workload-kubeconfig -o json \
| jq -r .data.value \
| base64 --decode \
> ~/.kube/v3-workload
Note, this command places the kubeconfig file in the ~/.kube directory. My personal workflow includes maintaining separate kubeconfig files for all of my clusters and using ktx to navigate between config files/clusters.
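If you don’t use ktx, here are a couple of equivalent ways to target the workload cluster directly; nothing CAPV-specific here, just standard kubectl behavior:

# Point kubectl at the workload cluster for a single command
$ kubectl --kubeconfig ~/.kube/v3-workload get nodes
# ...or export it for the rest of your shell session
$ export KUBECONFIG=~/.kube/v3-workload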
Now that I have my workload cluster’s kubeconfig file, I’m going to use ktx to set my cluster config, and then I’ll be ready to start interacting with my workload cluster:
$ ktx v3-workload
$ kubectl get nodes
NAME STATUS ROLES AGE VERSION
v3-workload-5qrkm NotReady master 14m v1.17.3
v3-workload-7cq87 NotReady master 19m v1.17.3
v3-workload-dnp2n NotReady master 16m v1.17.3
v3-workload-md-0-8d98957bb-4rbs7 NotReady <none> 16m v1.17.3
v3-workload-md-0-8d98957bb-kh6fq NotReady <none> 16m v1.17.3
v3-workload-md-0-8d98957bb-wzgvv NotReady <none> 16m v1.17.3
Great!! Now I can observe my 3 master and 3 worker nodes, but I also notice that they are in the NotReady status. That is because CAPI (and CAPV, by extension) does not automatically deploy a Container Network Interface (CNI) plugin for workload clusters. Feel free to deploy any CNI you see fit, but I’m going to use Calico in my cluster. I can simply deploy Calico by running the following kubectl command:
kubectl apply -f https://docs.projectcalico.org/manifests/calico.yaml
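If you want to watch the CNI come up before rechecking the nodes, the stock Calico manifest deploys a calico-node DaemonSet into kube-system; the label selector below is what I’d expect from that manifest:

# Watch the Calico pods start on each node
$ kubectl -n kube-system get pods -l k8s-app=calico-node -w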
After a couple of seconds, I can check my nodes again to make sure they’ve switched to the Ready state:
$ kubectl get nodes
NAME STATUS ROLES AGE VERSION
v3-workload-5qrkm Ready master 18m v1.17.3
v3-workload-7cq87 Ready master 23m v1.17.3
v3-workload-dnp2n Ready master 20m v1.17.3
v3-workload-md-0-8d98957bb-4rbs7 Ready <none> 20m v1.17.3
v3-workload-md-0-8d98957bb-kh6fq Ready <none> 20m v1.17.3
v3-workload-md-0-8d98957bb-wzgvv Ready <none> 20m v1.17.3
And there we have it! I’m ready to start deploying applications to my workload cluster!
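As a quick, entirely optional smoke test, you could run a throwaway deployment to confirm that scheduling and pod networking work end to end (the nginx deployment below is just an illustrative example):

# Run a test deployment and check that its pod schedules and starts
$ kubectl create deployment nginx --image=nginx
$ kubectl get pods -o wide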
Conclusion
So that about covers it. As I said initially, I wanted to take some time to look at the new workflow for deploying the management cluster in CAPV v1alpha3 as compared to the workflow I covered in my previous post on CAPV v1alpha2. Thanks for taking the time to follow along, and feel free to reach out in the comments with any follow-up questions or comments!