I am thrilled to announce that the Container Service Extension version 2.6.0 for VMware Cloud Director is generally available; please see the release notes here. I wanted to take some time to go over some of the new features introduced with CSE 2.6.0, including the much anticipated VCD UI plugin, which allows providers to enable tenants to deploy CSE Kubernetes clusters via the VCD tenant portal!
In this post, I’ll walk through some of the new features introduced in the CSE 2.6.0 release as well as step through the installation of the CSE UI Plugin for VCD.
New CSE Standard Templates
CSE uses customized VM templates (Kubernetes templates) as building blocks for deploying Kubernetes clusters in VCD. Templates vary by guest OS (PhotonOS or Ubuntu) and ship with defined versions of Kubernetes, docker, and the Weave CNI plugin. Each template name is uniquely constructed from the guest OS flavor, the Kubernetes version, and the Weave software version. These templates are maintained by VMware and reside in an official remote repository whose URL is configured by default in the sample CSE config file generated during the CSE server install.
For CSE 2.6.0, VMware has introduced two new Ubuntu-based templates and updated a few existing templates with security patches and a newer docker version. The two new Ubuntu templates bring new versions of Kubernetes for CSE Standard clusters: 1.16.6 and 1.17.2. You can view these (and all future) template announcements on the CSE documentation page.
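As a quick sanity check, the templates a CSE server has published can be listed from the vcd-cli CSE client extension. The commands below are a minimal sketch; the host, org, and user names are placeholders, and the exact columns shown vary by CSE version.

```
# Log in to VCD with vcd-cli (you will be prompted for the password);
# host, org, and user below are placeholders
vcd login vcd.example.com my-org my-user

# List the Kubernetes templates the CSE server currently offers
vcd cse template list
```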
In-Place Kubernetes Upgrades
CSE 2.6.0 also introduces the ability to upgrade CSE Standard Kubernetes clusters in place. Kubernetes is released on a very aggressive, fast-paced cycle: a new minor (1.xx) release every three months! Numerous security patches and fixes can also land between those minor releases, so having a method to perform in-place upgrades of CSE Kubernetes clusters is crucial. With CSE 2.6.0, support has been added for in-place upgrades of the following CSE Standard Kubernetes cluster components (a CLI sketch follows the list below):
- Kubernetes components (kube-apiserver, kubelet, kube-dns, etc.)
- docker engine
- Weave CNI plugin
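For tenants driving CSE from the CLI, the upgrade is a two-step operation: first check what an existing cluster can move to, then trigger the upgrade. The sketch below assumes a cluster named demo-cluster and uses a placeholder template name and revision taken from the upgrade-plan output; confirm the exact arguments with vcd cse cluster upgrade --help on your installation.

```
# Show which templates (and therefore Kubernetes/docker/Weave versions)
# an existing cluster is eligible to upgrade to
vcd cse cluster upgrade-plan demo-cluster

# Perform the in-place upgrade to a specific template name and revision
# (both values below are placeholders from the upgrade-plan output)
vcd cse cluster upgrade demo-cluster ubuntu-16.04_k8-1.17_weave-2.6.0 1
```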
I will be writing a follow-up post to walk through the process required for tenants to upgrade their existing clusters with CSE.
Secure Server Configuration Files
The CSE server utilizes a configuration file that contains user credentials for VCD, vSphere, and NSX-T/PKS (if utilizing CSE Enterprise). Previously, the configuration file was stored on the CSE server in plain text. With the release of CSE 2.6.0, VMware has added support for utilizing encrypted configuration files to power the CSE server.
Starting with this release, CSE server commands will only accept encrypted configuration files by default. CSE uses Fernet, an industry-standard symmetric encryption scheme, to secure the configuration files, and the encryption is keyed with a user-defined password. I will be authoring a follow-up post on installing (or upgrading to) CSE 2.6.0 that will detail the encryption workflow for the CSE server configuration file.
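As a preview of that workflow, the sketch below shows the general shape of encrypting a plain-text config file and feeding the encrypted file to the server commands. It assumes the cse encrypt command and the -c/--config install flag from the 2.6.0 release; verify the exact options with cse encrypt --help before relying on them.

```
# Encrypt the plain-text config; CSE prompts for a password and uses it
# to derive the Fernet key protecting the file
cse encrypt config.yaml --output encrypted-config.yaml

# Server commands now expect the encrypted file by default and will
# prompt for the same password
cse install -c encrypted-config.yaml
cse run -c encrypted-config.yaml
```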
Introducing the CSE UI Plugin for VMware Cloud Director
Arguably, one of the most requested feature enhancements for CSE was the ability for providers to allow tenants to provision CSE Kubernetes clusters natively via the VCD tenant portal. With CSE 2.6.0, VMware has introduced the CSE UI plugin that will allow tenants to provision and manage their CSE Kubernetes clusters directly from the VCD tenant portal.
Provider admins will install the CSE plugin in VCD and decide which tenants have access to it. From there, if RBAC is enabled via the CSE server configuration file, tenant org admins still control which users can provision clusters via the plugin by assigning the CSE custom rights bundles to those users' roles.
Installing and Configuring the CSE UI Plugin
The installation and configuration process is fairly straightforward. After a successful installation of CSE Server 2.6.0+, the provider admin can pull the CSE UI plugin binary from the following link. Admins can install the plugin using the CSE server CLI locally on the CSE server or via the VCD Provider Admin portal. I'll walk through the workflow of installing the plugin via the Provider Admin portal below.
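For reference, the server-side CLI path looks roughly like the sketch below. The ui-plugin subcommand and its arguments are my assumption rather than confirmed syntax, so treat this as a placeholder and consult the CSE documentation or cse ui-plugin --help for the authoritative commands; the Provider Admin portal workflow in the next section needs no CLI at all.

```
# Assumed syntax: register the downloaded plugin zip with VCD using the
# CSE server CLI and the (encrypted) server config file
cse ui-plugin register container-ui-plugin.zip -c encrypted-config.yaml

# Assumed syntax: list registered plugins or remove one by ID
cse ui-plugin list -c encrypted-config.yaml
cse ui-plugin deregister <plugin-id> -c encrypted-config.yaml
```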
Installing the CSE UI Plugin via the VCD Provider Admin Portal
After downloading the CSE UI plugin binaries, a system admin user should log in to the Provider Admin portal and navigate to the Customize Portal page:
Once on the Customize Portal page, select the Upload button to begin the installation of the plugin. For Step 1, click the Select Plugin File button and upload the downloaded plugin binary. Review the information about the plugin and click Next to move on to the next stage of installation:
On Step 2, decide which tenants get access to the plugin via the tenant portal. In my example below, I am only granting access to the base-org and cse-demo-org orgs; I do not want users in the temp-org to have access to the UI plugin:
For Step 3, review the configuration and click Finish to install the plugin:
At this point, the UI plugin is installed and configured for your tenants to use!
Using the CSE Plugin UI to Provision a Cluster
Now that the provider has enabled the CSE UI plugin, tenants in enabled orgs are able to provision CSE Kubernetes clusters via the VCD tenant portal.
First, I’m going to log in to the “base-org” tenant portal as the “base-admin” user, who is an org admin with the ability to provision CSE clusters. Next, I’ll select the “Kubernetes Container Clusters” option from the dropdown menu:
On the plugin landing page, I can see all of my existing clusters as well as create new clusters. Click the Add button to bring up the UI dialog for creating a new cluster. On Step 1, I'll select the OrgVDC that I'm deploying my cluster to:
On the next step, I will fill out all of the required information for the cluster configuration. This includes the number of worker nodes, the CPU/memory configuration of the nodes, the VCD storage profile to use, and the public SSH key to install on the nodes in case we would like the ability to SSH to the VMs for troubleshooting or customization purposes. We can also choose the NFS option if we'd like to deploy a VM that offers NFS storage resources to the cluster. If the Rollback option is enabled, the VMs provisioned for the cluster are automatically deleted if the creation process fails:
On the next 2 steps, we will choose the OrgVDC network to use for the Kubernetes node VMs as well as the Kubernetes template used to create the cluster:
Finally, we can review the cluster config and click Finish to provision the cluster:
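For completeness, an equivalent cluster can be provisioned outside the UI with the vcd-cli CSE extension. This is a rough sketch with placeholder names for the cluster, network, template, and SSH key; flag spellings differ slightly across CSE releases, so check vcd cse cluster create --help before using it.

```
# Create a 2-worker cluster on a specific OrgVDC network and template,
# with NFS enabled (all names and paths below are placeholders)
vcd cse cluster create demo-cluster \
    --network orgvdc-net-1 \
    --nodes 2 \
    --template-name ubuntu-16.04_k8-1.17_weave-2.6.0 \
    --template-revision 1 \
    --ssh-key ~/.ssh/id_rsa.pub \
    --enable-nfs
```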
After about 15 minutes, I can navigate to the landing page of my cluster, view information about the cluster, and download the kubeconfig file so I can access my Kubernetes cluster via kubectl!
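The kubeconfig can also be pulled straight from the CLI. The snippet below is a minimal sketch assuming the same placeholder cluster name used above.

```
# Save the cluster's kubeconfig and point kubectl at it
vcd cse cluster config demo-cluster > ~/demo-cluster.kubeconfig
export KUBECONFIG=~/demo-cluster.kubeconfig

# Verify the nodes are up
kubectl get nodes
```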
Conclusion
In this post, I reviewed the new features and functionality introduced by the CSE 2.6.0 release and gave a detailed walkthrough covering the installation, configuration, and usage of the new CSE UI plugin for VCD.
Stay tuned for follow-up posts on upgrading from CSE 2.x to 2.6.0 as well as workflows for performing in-place upgrades of CSE Standard Kubernetes clusters via CSE.