Container Service Extension 2.5 Installation: Part 2

Building on Part 1 of my series on installing VMware’s Container Service Extension 2.5.0, in this post I’ll walk through configuring a client server to interact with CSE via the vcd-cli tool. I’ll also cover onboarding a tenant and, from the tenant’s perspective, the workflow for provisioning and managing a Kubernetes cluster.

Configuring a CSE Client

Now that I’ve deployed my CSE server, I’ll need to utilize the vcd-cli tool with the CSE client extension enabled in order to interact with the CSE service. For the client server, I am, again, utilizing a CentOS 7.6 server and a Python 3.7.3 virtual environment to install and use the vcd-cli tool in this walkthrough.

The first thing I’ll need to do is create and activate my virtual environment, which I will install in the ~/cse-client directory:

$ python3.7 -m virtualenv ~/cse-client
$ source ~/cse-client/bin/activate

Now I’m ready to install the vcd-cli tool. vcd-cli is a command line interface for VMware vCloud Director that allows system administrators and tenants to perform operations from the command line for convenience and automation. I’ll use pip within the virtual environment to install vcd-cli and the Container Service Extension bits:

$ pip install vcd-cli
$ pip install container-service-extension
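
Before going any further, a quick sanity check that both packages landed in the virtual environment (optional; this is just vcd-cli’s version command plus pip, output omitted here):

$ vcd version
$ pip show container-service-extension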

Now that I’ve installed vcd-cli, I’m going to attempt a login to my vCloud Director environment to create a profile at ~/.vcd-cli/profiles.yaml, which I will eventually use to activate the CSE client extension:

$ vcd login director.vcd.zpod.io system administrator -iw
Password: 
administrator logged in, org: 'system', vdc: ''

Note: If you see a python traceback when attempting to log in to the vCloud Director environment that references ModuleNotFoundError: No module named '_sqlite3', you can disable the browsercookie feature by editing the following file within your virtual environment directory:

$ vi <virtual-env-directory>/lib/python3.7/site-packages/vcd_cli/browsercookie/__init__.py

and commenting out the following lines:

#try:
    # should use pysqlite2 to read the cookies.sqlite on Windows
    # otherwise will raise the "sqlite3.DatabaseError: file is encrypted or  is
    # not a database" exception
    #from pysqlite2 import dbapi2 as sqlite3
#except ImportError:
    #import sqlite3

After making the above changes, you should be able to successfully log in via the vcd-cli tool.

Now that I have successfully logged in to the vCloud Director environment, I can enable the CSE client in my vcd-cli profile. I’ll use vi to edit my profile:

$ vi ~/.vcd-cli/profiles.yaml 

and add the following lines to the file between the active: and profiles: sections to enable the CSE client. Your file should look like the example below:

active: default
extensions:
- container_service_extension.client.cse
profiles:

---output omitted---

Now, I’ll run a cse command to test my connection to the CSE server from the client:

$ vcd cse system info
property              value
--------------------  ------------------------------------------------------
all_threads           6
config_file           /home/cse/config.yaml
consumer_threads      5
description           Container Service Extension for VMware vCloud Director
product               CSE
python                3.7.3
requests_in_progress  0
status                Running
version               2.5.0

Great!! So now I’ve configured a client to communicate with the CSE server via the CSE client extension for vcd-cli. Now, as the vCD system admin, I’m ready to onboard a new tenant for Kubernetes cluster provisioning via CSE.

Onboarding a Tenant

I’m ready to onboard my first tenant that is interested in deploying Kubernetes clusters in their vCD-managed environment.

The first thing I’ll do is examine the Organizations and Organization Virtual Datacenters (OrgVDCs) available in my environment and what Kubernetes providers are assigned to those OrgVDCs, using the cse client:

$ vcd cse ovdc list
name                org                 k8s provider
------------------  ------------------  --------------
base-ovdc           base-org            none

As you can see, in my environment, I have a single org (base-org) and a single OrgVDC (base-ovdc). Currently, the k8s provider value for the OrgVDC is none, so tenants in the base-org cannot use CSE to provision clusters.

In order to allow those users to provision clusters, I need to enable the OrgVDC for cluster provisioning. The two options for the k8s provider are native and enterprise: native is for CSE Standard Kubernetes cluster provisioning, while enterprise is used for CSE Enterprise (Enterprise PKS) Kubernetes cluster creation.

Note: These commands must be run as a vCD system administrator.

First, I’ll need to instruct vcd-cli to “use” the base-org organization:

$ vcd org use base-org
now using org: 'base-org', vdc: 'base-ovdc', vApp: ''.

Then, as the system administrator, I can enable the base-ovdc to support CSE Standard Kubernetes cluster provisioning:

$ vcd cse ovdc enable base-ovdc --k8s-provider native
metadataUpdate: Updating metadata for Virtual Datacenter base-ovdc(dd7d117e-6034-467b-b696-de1b943e8664)
task: 05706a5a-0469-404f-82b6-559c078f855a, Updated metadata for Virtual Datacenter base-ovdc(dd7d117e-6034-467b-b696-de1b943e8664), result: success

I can now verify the OrgVDC metadata has been updated with the cse command below:

$ vcd cse ovdc list
name                org                 k8s provider
------------------  ------------------  --------------
base-ovdc           base-org            native

Awesome! Now my base-org tenant users have been granted the ability to deploy Kubernetes clusters in their OrgVDC.

A Note Regarding RBAC

If you remember back to Part 1 of my series, I enabled RBAC functionality on the CSE server to give my tenant admins the ability to control who can create Kubernetes clusters in their organizations. Now that I, as the vCD system admin, have enabled the base-org tenant to support Kubernetes cluster creation, it is up to the base-org tenant admin to allow specific users within their org to create clusters.

I have written a detailed blog post on configuring RBAC functionality so I won’t rehash that here, but at a high level, I performed the following actions in my environment to onboard users in the base-org, acting as the base-org tenant admin:

  • Logged into vcd-cli as a base-org user with the Organization Administrator role
  • Assigned the "{cse}:CSE NATIVE DEPLOY RIGHT" right to a role in the org (see the sketch below)
  • Assigned the above role to any user I’d like to be able to deploy Kubernetes clusters via CSE
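
As a rough sketch (the role name 'k8s-cluster-author' is just a placeholder I’m using here; the right name comes from the CSE documentation), the right-assignment step boils down to a single vcd-cli command run while logged in as the org admin:

$ vcd role add-right 'k8s-cluster-author' '{cse}:CSE NATIVE DEPLOY RIGHT'

Granting that role to individual users then happens through normal vCD user management.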

Now the users within the base-org tenant that have the proper permissions can provision Kubernetes clusters via CSE. So let’s see it in action!!

Provisioning (And Managing) Kubernetes Clusters via CSE

For the last section of the post, I’m going to switch personas to a tenant user (cse-native-user) of the base-org within the vCD environment. I have been assigned the "{cse}:CSE NATIVE DEPLOY RIGHT" right by my organization admin and I’m ready to provision clusters.

First, I’ll use vcd-cli to log in to my organization within the vCD environment:

$ vcd login director.vcd.zpod.io base-org cse-native-user -iw
Password: 
cse-native-user logged in, org: 'base-org', vdc: 'base-ovdc'

Once logged in, I’ll use the cse client to examine which Kubernetes templates are available to me:

$ vcd cse template list
name                                    revision  is_default    catalog  
------------------------------------  ----------  ------------  --------- 
ubuntu-16.04_k8-1.15_weave-2.5.2               1  False         cse-25  

And now I’m ready to provision a cluster with the following command:

$ vcd cse cluster create test-cluster -t ubuntu-16.04_k8-1.15_weave-2.5.2 -r 1 \
--network outside --ssh-key ~/.ssh/id_rsa.pub --nodes 1

cluster operation: Creating cluster vApp 'test-cluster' (2ad4df27-a7fd-4a11-bf29-f9e18eea490b) from template 'ubuntu-16.04_k8-1.15_weave-2.5.2' (revision 1), 
cluster operation: Creating master node for test-cluster (2ad4df27-a7fd-4a11-bf29-f9e18eea490b)
cluster operation: Initializing cluster test-cluster (2ad4df27-a7fd-4a11-bf29-f9e18eea490b)
cluster operation: Creating 1 node(s) for test-cluster(2ad4df27-a7fd-4a11-bf29-f9e18eea490b)
cluster operation: Adding 1 node(s) to test-cluster(2ad4df27-a7fd-4a11-bf29-f9e18eea490b)
task: 8d302115-35ef-4566-a95c-f4f0000010e8, Created cluster test-cluster (2ad4df27-a7fd-4a11-bf29-f9e18eea490b), result: success

where -t is the template name, -r is the template revision number, --network is the OrgVDC network the Kubernetes nodes will be deployed on, --ssh-key is the public SSH key CSE will embed in the nodes to allow root access to their OS via SSH, and --nodes is the number of worker nodes to deploy in the cluster.

As you can see from the output of the command, the CSE server is essentially performing the following actions:

  • Creating a vApp in vCD with the cluster name specified in the cluster create command
  • Creating a Kubernetes master node utilizing the vApp template I installed during the CSE server deployment
  • Running post provisioning scripts on the master node to instantiate the VM as a master node
  • Creating a Kubernetes worker node utilizing the vApp template I installed during the CSE server deployment
  • Running post provisioning scripts on the worker node to add it into the cluster, under control of the master node
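
While those steps are running, the in-progress cluster also shows up in the CSE client’s cluster list command, which is handy if you kick off a build and come back to it later (output omitted here):

$ vcd cse cluster list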

Once I have received the final result: success message, I am ready to access my cluster! First, I’ll get some info about the cluster I just provisioned:

$ vcd cse cluster info test-cluster

property           value
-----------------  -------------------------------------------------------------------------------
cluster_id         2ad4df27-a7fd-4a11-bf29-f9e18eea490b
cse_version        2.5.0
k8s_provider       native
k8s_version        1.15
leader_endpoint    10.96.66.39
master_nodes       {'name': 'mstr-spxa', 'ipAddress': '10.96.66.39'}
name               test-cluster
nfs_nodes
nodes              {'name': 'node-a5i0', 'ipAddress': '10.96.66.43'}
number_of_vms      2
status             POWERED_ON
template_name      ubuntu-16.04_k8-1.15_weave-2.5.2
template_revision  1
vapp_href          https://director.vcd.zpod.io/api/vApp/vapp-17e81bd9-8995-4c4b-8965-1df9ae23e9f9
vapp_id            17e81bd9-8995-4c4b-8965-1df9ae23e9f9
vdc_href           https://director.vcd.zpod.io/api/vdc/d72b0350-9614-4692-a3b9-730c362036c6
vdc_id             d72b0350-9614-4692-a3b9-730c362036c6
vdc_name           base-ovdc

The cluster info command gives me information about the cluster, including the IP addresses of the nodes, the current state of the cluster, and the template used to create it, among other things.

Now, I’ve provisioned a cluster and I’m ready to deploy some applications!! First, I need to use CSE to obtain the cluster config file that will allow me to access the cluster via native Kubernetes tooling like kubectl:

$ vcd cse cluster config test-cluster > ~/.kube/config

The above command grabs the cluster config file from the master node of test-cluster and redirects it into a file at the default location kubectl uses for cluster config files (~/.kube/config).
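
If you already have a ~/.kube/config you don’t want to overwrite, an alternative (this is standard kubectl behavior rather than anything CSE-specific, and the file name below is arbitrary) is to redirect the config elsewhere and point the KUBECONFIG environment variable at it:

$ vcd cse cluster config test-cluster > ~/test-cluster-config
$ export KUBECONFIG=~/test-cluster-config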

Now, I’ll verify connectivity to the cluster via kubectl:

$ kubectl get nodes
NAME        STATUS   ROLES    AGE     VERSION
mstr-spxa   Ready    master   11m     v1.15.3
node-a5i0   Ready    <none>   8m15s   v1.15.3
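
With both nodes reporting Ready, I can also run a quick smoke test with plain kubectl to confirm the cluster will actually schedule workloads (nothing CSE-specific here; the nginx deployment is just a throwaway example):

$ kubectl create deployment nginx --image=nginx
$ kubectl expose deployment nginx --port=80 --type=NodePort
$ kubectl get pods,svc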

Great, my cluster is up and running!! But I only deployed with 1 worker node… What if I want to add more? Do I have to redeploy? Nope!! CSE can add worker nodes to (and remove them from) existing clusters with the following command:

$ vcd cse cluster resize test-cluster --nodes 2 --network outside

where --nodes is the total number of worker nodes in the cluster. So in the example above, I added 1 additional worker node to my cluster because my original worker node count was 1.

Note: You will need to use the -t and -r flags in the above command to specify the template and revision if you are not using the default template defined in the CSE server configuration file.
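
For example, a resize that pins the template explicitly would look something like the command below (the template name and revision come from the template list earlier, and the node list command in the CSE 2.5 client simply confirms the new worker has joined):

$ vcd cse cluster resize test-cluster --nodes 2 --network outside \
-t ubuntu-16.04_k8-1.15_weave-2.5.2 -r 1
$ vcd cse node list test-cluster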

After performing all of my testing, I decided I’m going to delete my cluster with the following command:

$ vcd cse cluster delete test-cluster
Are you sure you want to delete the cluster? [y/N]: y

This command will delete the vApp that was created to house the cluster, which includes all components of the Kubernetes cluster. For additional information on managing Kubernetes clusters with CSE, refer to the product documentation.

Conclusion

Well if you’ve made it this far, congratulations!! I hope this walkthrough of the installation and configuration of Container Service Extension 2.5.0 was informative. Keep an eye on the blog for more articles on Day 2 operations coming down the pipe!!
