Exploring the Nirmata Kubernetes Extension for VMware Cloud Director

If you’ve been following my blog, you know that a lot of the content I publish focuses on VMware’s Container Service Extension and its integration with VMware Cloud Director, which allows service providers to create a Kubernetes-as-a-Service experience for their tenants utilizing their existing VCD-managed infrastructure.

Recently, my colleague at VMware, Daniel Paluszek, and I partnered with Nirmata to perform some testing on their new Kubernetes Extension for VMware Cloud Director. The Nirmata Kubernetes Extension for VCD builds on the rich UI experience already present in the VCD tenant portal by providing a workflow for provisioning Kubernetes clusters via CSE using the native UI.

The Native CSE Experience

As I’ve written about in my previous posts on CSE, once a service provider enables a tenant to provision Kubernetes clusters via CSE, tenants will use the vcd-cli with a CSE extension enabled to provision and manage Kubernetes clusters. For example, a tenant would log in to their VCD Org through the vcd-cli and issue the following command to create a Kubernetes cluster via CSE:

$ vcd cse cluster create k8-cluster-1 --network outside --nodes 1

where k8-cluster-1 is the name of the cluster, --network is the OvDC network the cluster’s nodes will utilize, and --nodes 1 defines the number of worker nodes the cluster will contain.

While many users are familiar enough with a CLI to adapt to this method of resource provisioning, one piece of feedback we get from our partner community is that they’d like to offer a native UI experience in the tenant portal to allow their end customers to more intuitively provision Kubernetes clusters via VCD. That’s where the Nirmata Kubernetes Extension for VCD comes in…

Utilizing the Nirmata Kubernetes Extension

The Nirmata Kubernetes Extension for VMware Cloud Director is a custom extension created by Nirmata in partnership with the VMware Cloud Director team. The extension is composed of a VCD UI extension as well as a Nirmata server, deployed as a Docker container, that passes communication between the UI elements, the CSE server, and the Nirmata SaaS platform. Daniel and I put together a detailed write-up over at the Nirmata blog, so I won’t go too deep in this blog post, but I wanted to walk through the experience of utilizing the service in the tenant portal.

After a Cloud Admin has onboarded a tenant in CSE and enabled the Nirmata Kubernetes Extension for their org, a tenant will see the Kubernetes option in their tenant portal menu:

After navigating to the Kubernetes page in the tenant portal, they can observe various information about the number of clusters, nodes, and pods deployed in the org. By selecting the Clusters option in the left-hand menu, they are taken to a page that contains information about existing clusters as well as options to provision new clusters or register existing clusters with the extension.

As we can see from the screenshot above, our cse-standard-admin VCD user already has a handful of clusters deployed in the environment. But what about a cluster that was provisioned outside of the UI? Can we still “see” that within the extension without redeploying? We sure can! We can click the Register button and register the existing cluster. This action communicates with the Nirmata server to deploy the Nirmata controller pod to the cluster, which feeds information about the cluster back to the UI for visibility:

After the cluster has been registered, we can select the cluster and observe a wealth of information about the cluster itself natively in the UI:

Nirmata also surfaces the idea of “add-ons,” or curated applications, that tenants can deploy directly to their clusters from the UI:

Service Providers can utilize applications curated by the Nirmata team as well as add their own custom deployments. To take it a step further, Service Providers can create profiles that contain a set of add-ons that will be deployed to a cluster automatically on provisioning.

As for interacting with existing clusters, tenants can also scale clusters directly in the tenant portal via the extension:

So tenants can manage existing clusters deployed by CSE, but what about provisioning net-new workloads? Tenants can visit the Clusters page of the UI extension, select the Create button, and provision a Kubernetes cluster with a couple of clicks!!

The tenant defines information such as OvDC, OvDC network, storage policy, and worker node count, and Nirmata and CSE handle the rest! In my humble opinion, this is a game changer for the service provider community already invested in VCD. By installing and configuring CSE and the Nirmata Kubernetes Extension, they have the foundation in place to build an advanced Kubernetes-as-a-Service offering for their tenants to consume.

Conclusion

Nirmata has done some great work in conjunction with the VMware Cloud Director team to bring Kubernetes cluster provisioning and management directly into the tenant portal of VCD. As I said earlier, Daniel and I collaborated on a more detailed write-up on the Nirmata Kubernetes Extension for VCD that is hosted on the Nirmata blog. We also put together a video walkthrough of the extension, which you can view below:

Feel free to reach out to me, Daniel, or the Nirmata team with any additional feedback or questions around the Nirmata Kubernetes Extension for VCD. Thanks for the read!

Container Service Extension 2.5 Installation: Part 2

Building on Part 1 of my series on installing VMware’s Container Service Extension 2.5.0, in this post, I’ll walk through the process of configuring a client server to interact with CSE via the vcd-cli tool. I’ll also walk through the process of onboarding a tenant as well as the workflow, from the tenant’s perspective, of provisioning and managing a Kubernetes cluster.

Configuring a CSE Client

Now that I’ve deployed my CSE server, I’ll need to utilize the vcd-cli tool with the CSE client extension enabled in order to interact with the CSE service. For the client server, I am, again, utilizing a CentOS 7.6 server and a Python 3.7.3 virtual environment to install and utilize the vcd-cli tool in this walkthrough.

The first thing I’ll need to do is create and activate my virtual environment, which I will install in the ~/cse-client directory:

$ python3.7 -m virtualenv ~/cse-client
$ source ~/cse-client/bin/activate

Now I’m ready to install the vcd-cli tool. vcd-cli is a command line interface for VMware vCloud Director that allows system administrators and tenants to perform operations from the command line for convenience and automation. Use pip within the virtual environment to install vcd-cli and the Container Service Extension bits:

$ pip install vcd-cli
$ pip install container-service-extension

Now that I’ve installed vcd-cli, I’m going to attempt a login to my vCloud Director environment to create a profile at ~/.vcd-cli/profiles.yaml that we will eventually use to activate the CSE client extension:

$ vcd login director.vcd.zpod.io system administrator -iw
Password: 
administrator logged in, org: 'system', vdc: ''

Note: If you see a python traceback when attempting to log in to the vCloud Director environment that references ModuleNotFoundError: No module named '_sqlite3', you can disable the browsercookie feature by editing the following file within your virtual environment directory:

$ vi <virtual-env-directory>/lib/python3.7/site-packages/vcd_cli/browsercookie/__init__.py

and commenting out the following lines:

#try:
    # should use pysqlite2 to read the cookies.sqlite on Windows
    # otherwise will raise the "sqlite3.DatabaseError: file is encrypted or  is
    # not a database" exception
    #from pysqlite2 import dbapi2 as sqlite3
#except ImportError:
    #import sqlite3

After making the above changes, you should be able to successfully log in via the vcd-cli tool.

Now that I have successfully logged in to the vCloud Director environment, I can enable the CSE client in my vcd-cli profile. I’ll use vi to edit my profile:

$ vi ~/.vcd-cli/profiles.yaml 

and add the following lines to the file between the active: and profiles: sections to enable the CSE client. Your file should look like the example below:

active: default
extensions:
- container_service_extension.client.cse
profiles:

---output omitted---

Now, I’ll run a cse command to test my connection to the CSE server from the client:

$ vcd cse system info
property              value
--------------------  ------------------------------------------------------
all_threads           6
config_file           /home/cse/config.yaml
consumer_threads      5
description           Container Service Extension for VMware vCloud Director
product               CSE
python                3.7.3
requests_in_progress  0
status                Running
version               2.5.0

Great!! So now I’ve configured a client to communicate with the CSE server via the CSE client extension for vcd-cli. Now, as the vCD system admin, I’m ready to onboard a new tenant for Kubernetes cluster provisioning via CSE.

Onboarding a Tenant

I’m ready to onboard my first tenant that is interested in deploying Kubernetes clusters in their vCD-managed environments.

The first thing I’ll do is examine the Organizations and Organization Virtual Datacenters (OrgVDCs) available in my environment and what Kubernetes providers are assigned to those OrgVDCs, using the cse client:

$ vcd cse ovdc list
name                org                 k8s provider
------------------  ------------------  --------------
base-ovdc           base-org            none

As you can see, in my environment, I have a single org (base-org) and a single OrgVDC (base-ovdc). Currently, the k8s provider value for the OrgVDC is none, so tenants in the base-org cannot use CSE to provision clusters.

In order to allow those users to provision clusters, I need to enable the OrgVDC for cluster provisioning. The two options for k8s provider are native or enterprise: native is for CSE Standard Kubernetes cluster provisioning, while enterprise is used for CSE Enterprise (Enterprise PKS) Kubernetes cluster creation.

Note: These commands must be run as a vCD system administrator

First, I’ll need to instruct vcd-cli to “use” the base-org organization:

$ vcd org use base-org
now using org: 'base-org', vdc: 'base-ovdc', vApp: ''.

Then, as the system administrator, I can enable the base-ovdc to support CSE Standard Kubernetes cluster provisioning:

$ vcd cse ovdc enable base-ovdc --k8s-provider native
metadataUpdate: Updating metadata for Virtual Datacenter base-ovdc(dd7d117e-6034-467b-b696-de1b943e8664)
task: 05706a5a-0469-404f-82b6-559c078f855a, Updated metadata for Virtual Datacenter base-ovdc(dd7d117e-6034-467b-b696-de1b943e8664), result: success

I can now verify the OrgVDC metadata has been updated with the cse command below:

$ vcd cse ovdc list
name                org                 k8s provider
------------------  ------------------  --------------
base-ovdc           base-org            native

Awesome! Now my base-org tenant users have been granted the ability to deploy Kubernetes clusters in their OrgVDC.

A Note Regarding RBAC

If you remember back to Part 1 of my series, I enabled RBAC functionality on the CSE server to allow my tenant admins the ability to control who is able to create Kubernetes clusters in their organizations. Now that I, as the vCD system admin, have enabled the base-org tenant to support Kubernetes cluster creation, it is up to the base-org tenant admin to allow specific users within their org to create clusters.

I have written a detailed blog post for configuring RBAC functionality so I won’t rehash that here, but from a high level, I have performed the following actions in my environment to onboard users in the base-org as the base-org tenant admin (a rough command sketch follows the list):

  • Logged into vcd-cli as a base-org user with the Organizational Admin role
  • Assigned the "{cse}:CSE NATIVE DEPLOY RIGHT" right to a role in the org
  • Assigned above role to any user I’d like to be able to deploy Kubernetes clusters via CSE
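
Below is a rough sketch of those steps in vcd-cli. The role and user names are hypothetical, and the exact syntax may vary between vcd-cli versions, so treat this as illustrative rather than authoritative:

$ vcd login director.vcd.zpod.io base-org org-admin -iw

# Add the CSE deploy right to a role in the org
$ vcd role add-right "K8 Deployer" "{cse}:CSE NATIVE DEPLOY RIGHT"

# Create a user with that role so they can deploy clusters via CSE
$ vcd user create cse-native-user <password> "K8 Deployer"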

Now the users within the base-org tenant have the proper permissions to provision Kubernetes clusters via CSE. So let’s see it in action!!

Provisioning (And Managing) Kubernetes Clusters via CSE

For the last section of the post, I’m going to switch personas to a tenant user (cse-native-user) of the base-org within the vCD environment. I have been assigned the "{cse}:CSE NATIVE DEPLOY RIGHT" right by my organization admin and I’m ready to provision clusters.

First, I’ll use vcd-cli to log in to my organization within the vCD environment:

$ vcd login director.vcd.zpod.io base-org cse-native-user -iw
Password: 
cse-native-user logged in, org: 'base-org', vdc: 'base-ovdc'

Once logged in, I’ll use the cse client to examine which Kubernetes templates are available to me:

$ vcd cse template list
name                                    revision  is_default    catalog  
------------------------------------  ----------  ------------  --------- 
ubuntu-16.04_k8-1.15_weave-2.5.2               1  False         cse-25  

And now I’m ready to provision a cluster with the following command:

$ vcd cse cluster create test-cluster -t ubuntu-16.04_k8-1.15_weave-2.5.2 -r 1 \
--network outside --ssh-key ~/.ssh/id_rsa.pub --nodes 1

cluster operation: Creating cluster vApp 'test-cluster' (2ad4df27-a7fd-4a11-bf29-f9e18eea490b) from template 'ubuntu-16.04_k8-1.15_weave-2.5.2' (revision 1), 
cluster operation: Creating master node for test-cluster (2ad4df27-a7fd-4a11-bf29-f9e18eea490b)
cluster operation: Initializing cluster test-cluster (2ad4df27-a7fd-4a11-bf29-f9e18eea490b)
cluster operation: Creating 1 node(s) for test-cluster(2ad4df27-a7fd-4a11-bf29-f9e18eea490b)
cluster operation: Adding 1 node(s) to test-cluster(2ad4df27-a7fd-4a11-bf29-f9e18eea490b)
task: 8d302115-35ef-4566-a95c-f4f0000010e8, Created cluster test-cluster (2ad4df27-a7fd-4a11-bf29-f9e18eea490b), result: success

where -t is the template name, -r is the template revision number, --network is the OrgVDC network we will deploy the Kubernetes nodes on, --ssh-key is the public ssh key CSE will embed in the Kubernetes nodes to allow root access via ssh to the OS of the nodes, and --nodes is the number of worker nodes to be deployed in the cluster.

As you can see from the output of the command, the CSE server is essentially performing the following actions:

  • Creating a vApp in vCD with the cluster name specified in the cluster create command
  • Creating a Kubernetes master node utilizing the vApp template I installed during the CSE server deployment
  • Running post provisioning scripts on the master node to instantiate the VM as a master node
  • Creating a Kubernetes worker node utilizing the vApp template I installed during the CSE server deployment
  • Running post provisioning scripts on the worker node to add it into the cluster, under control of the master node

Once I have received the final result: success message, I am ready to access my cluster! First, I’ll get some info about the cluster I just provisioned:

$ vcd cse cluster info test-cluster

property           value
-----------------  -------------------------------------------------------------------------------
cluster_id         2ad4df27-a7fd-4a11-bf29-f9e18eea490b
cse_version        2.5.0
k8s_provider       native
k8s_version        1.15
leader_endpoint    10.96.66.39
master_nodes       {'name': 'mstr-spxa', 'ipAddress': '10.96.66.39'}
name               test-cluster
nfs_nodes
nodes              {'name': 'node-a5i0', 'ipAddress': '10.96.66.43'}
number_of_vms      2
status             POWERED_ON
template_name      ubuntu-16.04_k8-1.15_weave-2.5.2
template_revision  1
vapp_href          https://director.vcd.zpod.io/api/vApp/vapp-17e81bd9-8995-4c4b-8965-1df9ae23e9f9
vapp_id            17e81bd9-8995-4c4b-8965-1df9ae23e9f9
vdc_href           https://director.vcd.zpod.io/api/vdc/d72b0350-9614-4692-a3b9-730c362036c6
vdc_id             d72b0350-9614-4692-a3b9-730c362036c6
vdc_name           base-ovdc

The cluster info command will give me information about the cluster, including the IP addresses of the nodes, the current state of the cluster, and the template used to create it, among other things.

Now, I’ve provisioned a cluster and I’m ready to deploy some applications!! First, I need to use CSE to obtain the cluster config file that will allow me to access the cluster via native Kubernetes tooling like kubectl:

$ vcd cse cluster config test-cluster > ~/.kube/config

The above command grabs the cluster config file from the master node of test-cluster and pipes it into a file at the default location used by kubectl (~/.kube/config) for cluster config files.
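
Note: If you’d rather not overwrite an existing kubectl config, you can write the cluster config to a separate file and point kubectl at it via the KUBECONFIG environment variable instead (the file path below is just an example):

$ vcd cse cluster config test-cluster > ~/test-cluster.kubeconfig
$ export KUBECONFIG=~/test-cluster.kubeconfig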

Now, I’ll verify connectivity to the cluster via kubectl:

$ kubectl get nodes
NAME        STATUS   ROLES    AGE     VERSION
mstr-spxa   Ready    master   11m     v1.15.3
node-a5i0   Ready    <none>   8m15s   v1.15.3

Great, my cluster is up and running!! But I only deployed with 1 worker node… What if I want to add more? Do I have to redeploy? Nope!! CSE can add (and remove) worker nodes to existing clusters with the following command:

$ vcd cse cluster resize test-cluster --nodes 2 --network outside

where --nodes is the total number of worker nodes in the cluster. So in the example above, I added 1 additional worker node to my cluster because my original worker node count was 1.

Note: You will need to use the -t and -r flags in the above command to specify the template and revision if you are not using the default template defined in the CSE server configuration file.

After performing all of my testing, I decided I’m going to delete my cluster with the following command:

$ vcd cse cluster delete test-cluster
Are you sure you want to delete the cluster? [y/N]: y

This command will delete the vApp that was created to house the cluster, which includes all components of the Kubernetes cluster. For additional information on managing Kubernetes clusters with CSE, refer to the product documentation.
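
To confirm the cluster (and its vApp) is really gone, I can list the remaining clusters with the CSE client:

$ vcd cse cluster list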

Conclusion

Well, if you’ve made it this far, congratulations!! I hope this walkthrough of the installation and configuration of Container Service Extension 2.5.0 was informative. Keep an eye on the blog for more articles on Day 2 operations coming down the pipe!!

Container Service Extension 2.5 Installation: Part 1

With the recent release of the Container Service Extension 2.5.0, I wanted to take some time to walk through the installation and configuration of the Container Service Extension (CSE) server in conjunction with VMware vCloud Director 10.

This will be a series of 2 blog posts that cover the following topics:

  • Part 1: CSE overview, prerequisites, CSE server installation, and server configuration
  • Part 2: CSE client configuration, tenant onboarding, and Kubernetes cluster provisioning and management

Container Service Extension Overview

Before we get started, I wanted to talk a bit about CSE and what purpose it serves in a Service Provider’s environment. The Container Service Extension is a VMware vCloud Director extension that helps tenants create, lifecycle manage, and interact with Kubernetes clusters in vCloud Director-managed environments.

There are currently two versions of CSE: Standard and Enterprise. CSE Standard brings Kubernetes-as-a-Service to vCD by creating customized vApp templates and enabling tenant/organization administrators to deploy fully functional Kubernetes clusters in self-contained vApps. CSE Standard cluster creation can be enabled on existing NSX-V backed OrgVDCs in a tenant’s environment. With the release of CSE Enterprise in the CSE 2.0 release, VMware has also added the ability for tenants to provision VMware Enterprise PKS Kubernetes clusters backed by NSX-T resources in vCloud Director-managed environments. In this blog post, I am going to focus on the enablement of CSE Standard Kubernetes cluster creation in an existing vCloud Director OvDC.

For more information on CSE, have a look at the Kubernetes-as-a-Service in vCloud Director reference architecture (authored by yours truly 😄) as well as the CSE Installation Documentation.

Prerequisites

In order to install CSE 2.5.0, please ensure you review the CSE Server Installation Prerequisites section of the CSE documentation to ensure you have fulfilled all of the vCD-specific requirements to support CSE Standard Kubernetes cluster deployment. As mentioned in the aforementioned documentation, VMware recommends utilizing a user with the System Administrator role in the vCD environment for CSE server management.

Along with the prereqs mentioned in the documentation above, please ensure you have a RabbitMQ server available as the CSE server utilizes AMQP as a messaging queue to communicate with the vCD cell, as referenced in the diagram below:

For vCloud Director 10, you will need to deploy RabbitMQ 3.7.x (see vCloud Director Release notes for RabbitMQ compatibility information). For more information on deploying RabbitMQ, please refer to the RabbitMQ installation documentation.

Finally, CSE requires Python 3.7.3 or later at the time of this writing. In this walkthrough, I have chosen to install the CSE Server on a CentOS 7.6 install within a Python 3.7.3 virtual environment but any variant of Linux that supports Python 3.7.3 installations will suffice. For more information on configuring a virtual environment to support a CSE Server installation, see my earlier blog post which walks through the process.
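
For reference, creating and activating such a virtual environment looks roughly like the following (this assumes Python 3.7.3 is already installed and available as python3.7, and mirrors the commands I use for the client later in this series):

$ python3.7 -m virtualenv ~/cse-env
$ source ~/cse-env/bin/activate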

Installing CSE Server 2.5.0

Now that I’ve established the prereqs, I am ready to install the bits that will support the CSE server installation.

Note: The following commands will need to be run on the Linux server hosting the CSE server installation.

First things first, I’ll create a cse user that I’ll use to manage the CSE server:

# useradd cse
# passwd cse
# su - cse

Now, after creating my Python 3.7.3 virtual environment, I’ll need to activate it. I created my virtual environment in the ~/cse-env directory:

$ source ~/cse-env/bin/activate

Note: After activating the virtual environment, you should see the (virtual-environment-name) prepended to your bash prompt, confirming you are operating in the virtual environment.

Now I’m ready to install the CSE server bits within the virtual environment! Utilize pip to pull down the CSE packages:

$ pip install container-service-extension

Verify CSE is installed and the version is 2.5.0:

$ cse version
CSE, Container Service Extension for VMware vCloud Director, version 2.5.0

Now I’m ready to build the configuration file and deploy the CSE server!!

Container Service Extension Configuration File

The CSE server utilizes a yaml config file that contains information about the vCloud Director/vCenter infrastructure that will be supporting the Kubernetes cluster deployments. The config file also contains information regarding the RabbitMQ broker I covered in the prerequisites section. This config file will be used to install and run the CSE service on the CSE server.

Before we get started, I wanted to take some time to talk about how CSE deploys Kubernetes clusters. CSE uses customized VM templates (Kubernetes templates) as building blocks for deployment of Kubernetes clusters. These templates are crucial for CSE to function properly. New in version 2.5.0, CSE utilizes “pre-configured” template definitions hosted on a remote repository.

Templates vary by guest OS (e.g. PhotonOS, Ubuntu), as well as software versions, like Kubernetes, Docker, and Weave. Each template name is uniquely constructed based on the flavor of guest OS, Kubernetes, and Weave versions. The definitions of different templates reside in an official location hosted at a remote repository URL. The CSE sample config file, out of the box, points to the official location of those templates definitions. The remote repository is officially managed by maintainers of the CSE project. For more information on template management in CSE, refer to the CSE documentation.

Now that we’ve discussed some of the changes for template management in CSE 2.5.0, I’m ready to start our CSE server installation.

As mentioned earlier, I installed the CSE bits within a Python 3.7.3 virtual environment, so the first thing I’ll do is activate that virtual environment and verify my CSE version:

Note: All commands below should be run from the CSE server CLI.

$ source cse-env/bin/activate


$ cse version
CSE, Container Service Extension for VMware vCloud Director, version 2.5.0

I’ll use the cse command to generate a sample file (I’m calling mine config.yaml) that I can use to build out my config file for my CSE installation:

$ cse sample -o config.yaml

Great! Now I have a skeleton configuration file to use to build out my CSE server config file. Let’s have a look at each section of the config file.

amqp section

The amqp section of the config file contains information about the RabbitMQ AMQP broker that the CSE server will use to communicate with the vCloud Director instance. Let’s have a look at my completed amqp section below. All of the values used below are from my lab and some will differ for your deployment:

amqp:
  exchange: cse-exchange      <--- RabbitMQ exchange name
  host: rabbitmq.vcd.zpod.io  <--- RabbitMQ hostname
  password: <password>        <--- RabbitMQ user's password
  port: 5672                  <--- RabbitMQ port (default is 5672)
  prefix: vcd                 <--- default value, can be left as is
  routing_key: cse            <--- default value, can be left as is
  ssl: false                  <--- Set to "true" if using SSL for RabbitMQ connections
  ssl_accept_all: false       <--- Set to "true" if using SSL and utilizing self-signed certs
  username: cse-amqp          <--- RabbitMQ username (with access to the vhost)
  vhost: /                    <--- RabbitMQ virtual host that contains the exchange

The exchange defined in the file above will be created by the CSE server on install (if it doesn’t already exist). This exchange should NOT be the same one configured in the Extensibility section of the vCD Admin Portal. However, the Extensibility section of the vCD Admin Portal must be configured using the same virtual host (/ in my example above). See the screenshot below for an example of my vCD Extensibility config:

No manual config is required on the RabbitMQ server side aside from ensuring the RabbitMQ user (cse-amqp in the example above) has full access to the virtual host. See my previous post on Deploying vCloud Director for information on creating RabbitMQ users.
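
For reference, creating such a RabbitMQ user and granting it full permissions on the virtual host looks roughly like the following with rabbitmqctl, run as root on the RabbitMQ server (the username and vhost here match my config above):

# rabbitmqctl add_user cse-amqp <password>
# rabbitmqctl set_permissions -p / cse-amqp ".*" ".*" ".*"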

vcd section

As you might guess, this section of the config file contains information regarding the vCloud Director instance that CSE will communicate with via the API. Let’s have a look at the vcd config:

vcd:
  api_version: '33.0'            <--- vCD API version
  host: director.vcd.zpod.io     <--- vCD Hostname
  log: true                      <--- Set to "true" to generate log files for CSE/vCD interactions
  password: my_secret_password   <--- vCD system admin's password
  port: 443                      <--- default value, can be left as is unless otherwise needed 
  username: administrator        <--- vCD system admin username
  verify: false                  <--- Set to "true" to verify SSL certificates

vcs section

In this section, we define the vCenter instances that are being managed by vCD. CSE needs access to the vCenter appliances in order to perform guest operation modifications, queries, and program execution. In my lab, my vCD deployment is managing 2 vCSA instances. You can add additional instances if required:

vcs:
- name: vc-pks                           <--- vCenter name as it appears in vCD
  password: <password>                   <--- administrator@vsphere.local's password
  username: administrator@vsphere.local  <--- vCenter admin's username
  verify: false                          <--- Set to "true" to verify SSL certificates
- name: vc-standard
  password: <password>
  username: administrator@vsphere.local
  verify: false

service section

The service section is small and really only has one config decision to make. If the enforce_authorization flag is set to false, ANY user that has permissions to create vApps in any Org in the vCD environment can provision Kubernetes clusters via CSE. If set to true, you can utilize RBAC functionality to grant specific Orgs and specific users within those Orgs rights to create clusters. When set to true, the enforce_authorization flag defaults to refusing any request to create Kubernetes clusters via CSE unless a user (and its org) has the proper rights assigned to allow the operation. For more information on configuring RBAC, see my previous blog post that walks through RBAC enablement scenarios (although the blog post was authored utilizing CSE 2.0, the constructs have not changed in 2.5.0).

service:
  enforce_authorization: true
  listeners: 5                  <--- number of threads CSE server can utilize
  log_wire: false               <--- if set to "true", will log all REST calls initiated by CSE to vCD

broker section

Here’s where all the magic happens!! The broker section is where we define where and how the CSE server will deploy the initial Kubernetes cluster, which serves as the basis for the vApp template(s) used for tenants’ Kubernetes cluster deployments.

  • The catalog value is the name CSE will use when creating a publicly shared catalog within my org for storing the vApp template(s). The CSE server will create this catalog in vCD when I install the CSE server.

  • The default_template_name value is the template name that CSE will use by default when users deploy Kubernetes clusters via CSE without defining a specific template. Refer to the following link from the CSE documentation for available template names and revision numbers.

  • The default_template_revision value is a numerical value associated with the version of the template released by VMware. At the time of writing, all available templates are at revision 1.

  • The ip_allocation_mode value is the mode to be used during the install process to build the template. Possible values are dhcp or pool. During creation of clusters for tenants, pool IP allocation mode is always used.

  • The network value is an OrgVDC Network within the OrgVDC that will be used during the install process to build the template. It should have outbound access to the public internet in order to reach the template repository. The CSE server does not need to be connected to this network.

  • The org value is the organization that contains the shared catalog where the Kubernetes vApp templates will be stored.

  • The remote_template_cookbook_url value is the URL of the template repository where all template definitions and associated script files are hosted. This is new in CSE 2.5.0.

  • The storage_profile is the name of the storage profile to use when creating the temporary vApp used to build the Kubernetes cluster vApp template.

  • The vdc value is the virtual datacenter within the org (defined above) that will be used during the install process to build the vApp template.

Here is an example of the completed broker section:

broker:
  catalog: cse-25
  default_template_name: ubuntu-16.04_k8-1.15_weave-2.5.2
  default_template_revision: 1
  ip_allocation_mode: pool
  network: outside
  org: cse_25_test
  remote_template_cookbook_url: https://raw.githubusercontent.com/vmware/container-service-extension-templates/master/template.yaml
  storage_profile: '*'
  vdc: cse_vdc_1

template_rules section

This section is new in CSE 2.5.0 and is entirely optional. The template_rules section allows system admins to utilize vCD compute policies to limit which users have access to which Kubernetes templates. By default, any user that can create Kubernetes clusters via CSE has access to all available templates; template rules, combined with compute policies, restrict that access.
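
For illustration, here is a hedged sketch of what a template rule might look like in the config file; the rule name and compute policy name are hypothetical, so consult the CSE documentation for the authoritative format:

template_rules:
- name: Rule1
  target:
    name: ubuntu-16.04_k8-1.15_weave-2.5.2
    revision: 1
  action:
    compute_policy: "sample-policy"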

pks_config section

This section points to a separate .yaml config file that contains information about a VMware Enterprise PKS deployment if you intend to utilize CSE Enterprise as well. I will detail this process in a later post, but for now, I’m going to install with only CSE Standard.

Note: System admins can add CSE Enterprise capabilities via the pks_config flag at any point after CSE server installation; it does not have to be set on initial install.

pks_config: null  <--- Set to name of .yaml config file for CSE Enterprise cluster deployment

Now that I’ve gone over the config file, I am ready to proceed with my installation of the CSE server!!

CSE Server Installation and Validation

Before starting the install, we need to set the correct permissions on the config file:

$ chmod 600 config.yaml

After building out the config file, I’ll simply need to run the following command to install CSE in the environment. I’ll use the --skip-template-creation flag to ensure the configuration is sound and install the desired template in a subsequent command:

$ cse install -c config.yaml --skip-template-creation

Required Python version: >= 3.7.3
Installed Python version: 3.7.3 (default, Sep 16 2019, 12:54:43) 
[GCC 4.8.5 20150623 (Red Hat 4.8.5-36)]
Validating config file 'config.yaml'
Connected to AMQP server (rabbitmq.vcd.zpod.io:5672)
InsecureRequestWarning: Unverified HTTPS request is being made. Adding certificate verification is strongly advised.
Connected to vCloud Director (director.vcd.zpod.io:443)
Connected to vCenter Server 'vc-standard' as 'administrator@vcd.zpod.io' (vcsa.vcd.zpod.io:443)
Connected to vCenter Server 'vc-pks' as 'administrator@pks.zpod.io' (vcsa.pks.zpod.io:443)
Config file 'config.yaml' is valid
Installing CSE on vCloud Director using config file 'config.yaml'
Connected to vCD as system administrator: director.vcd.zpod.io:443
Checking for AMQP exchange 'cse-exchange'
AMQP exchange 'cse-exchange' is ready
Updated cse API Extension in vCD
Right: CSE NATIVE DEPLOY RIGHT added to vCD
Right: CSE NATIVE DEPLOY RIGHT assigned to System organization.
Right: PKS DEPLOY RIGHT added to vCD
Right: PKS DEPLOY RIGHT assigned to System organization.
Created catalog 'cse-25'
Skipping creation of templates

Great!! I’ve installed the CSE Server. Now I’m ready to deploy a Kubernetes cluster vApp template into my cse-25 catalog. I can obtain a template name from the Template Announcement section of the CSE documentation. I can also use the following cse command from the CLI of the CSE server to query available templates:

$ cse template list -d remote

I can also define an ssh-key that will be injected into the VMs that are provisioned to act as the Kubernetes nodes with the --ssh-key flag. The system admin could then use the private ssh-key to access the Kubernetes nodes’ operating system via SSH. I’ll use the following cse command to install the Ubuntu Kubernetes template:

$ cse template install ubuntu-16.04_k8-1.15_weave-2.5.2 --ssh-key id_rsa.pub

This command pulls down an Ubuntu OVA to the CSE server and then pushes it to the vCD environment, creates a set of VMs, and performs all required post-provisioning customization to create a functioning Kubernetes cluster.

After the Kubernetes cluster is created, CSE creates a vApp template based on the cluster and then deletes the running cluster from the environment. This vApp template will then be used by CSE to create Kubernetes clusters when tenants use the vcd-cli to create clusters.

Now I’m finally ready to test our install with the cse run command, which will run the CSE service in the current bash shell:

$ cse run

---output omitted---

AMQP exchange 'vcd' exists
CSE on vCD is currently enabled
Found catalog 'cse-25'
CSE installation is valid
Started thread 'MessageConsumer-0 (140180650903296)'
Started thread 'MessageConsumer-1 (140180417672960)'
Started thread 'MessageConsumer-2 (140180634117888)'
Started thread 'MessageConsumer-3 (140180642510592)'
Started thread 'MessageConsumer-4 (140180409280256)'
Container Service Extension for vCloud Director
Server running using config file: config.yaml
Log files: cse-logs/cse-server-info.log, cse-logs/cse-server-debug.log
waiting for requests (ctrl+c to close)

Awesome!! We can see the AMQP threads are created in the output and the server is running using my config file. Use ctrl+c to stop the service and return to the command prompt.

Controlling the CSE Service with systemctl

As you can see above, I can manually run the CSE Server with the cse run command, but it makes more sense to be able to automate the starting and stopping of the CSE service. To do that, I’ll create a systemd unit file and manage the CSE service via systemctl.

First, I’ll need to create a script that the systemd unit file will refer to in order to start the service. My virtual environment is located at /home/cse/cse-env and my CSE config file is located at /home/cse/config.yaml.

I’ll use vi to create the cse.sh file:

$ vi ~/cse.sh

And add the following text to the new file and save:

#!/usr/bin/env bash

source /home/cse/cse-env/bin/activate
cse run -c /home/cse/config.yaml

Now that I’ve created the start script, I need to create a unit file for systemd. I’ll access the root user on the CSE server:

$ su -

Now I’m ready to create the unit file. I’ll use vi to create the /etc/systemd/system/cse.service file:

# vi /etc/systemd/system/cse.service

And add the following text to the file:

[Service]
ExecStart=/bin/sh /home/cse/cse.sh
Type=simple
User=cse
WorkingDirectory=/home/cse
Restart=always
[Install]
WantedBy=multi-user.target

After adding the unit file, I’ll need to reload the systemd daemon:

# systemctl daemon-reload

Now I’ll start the CSE service and enable it to ensure it starts automatically on boot:

# systemctl start cse
# systemctl enable cse

Finally, I’ll check the status of the service to ensure it is active and verify we see the messaging threads:

# service cse status
Redirecting to /bin/systemctl status cse.service
● cse.service
   Loaded: loaded (/etc/systemd/system/cse.service; disabled; vendor preset: disabled)
   Active: active (running) since Thu 2019-10-10 17:00:50 EDT; 13s ago
 Main PID: 9621 (sh)
   CGroup: /system.slice/cse.service
           ├─9621 /bin/sh /home/cse/cse.sh
           └─9624 /home/cse/cse-ga/bin/python3.7 /home/cse/cse-ga/bin/cse run -c /home/cse/config.yaml

Oct 10 17:00:59 cse-25.vcd.zpod.io sh[9621]: CSE installation is valid
Oct 10 17:01:00 cse-25.vcd.zpod.io sh[9621]: Started thread 'MessageConsumer-0 (139712918025984)'
Oct 10 17:01:00 cse-25.vcd.zpod.io sh[9621]: Started thread 'MessageConsumer-1 (139712892847872)'
Oct 10 17:01:00 cse-25.vcd.zpod.io sh[9621]: Started thread 'MessageConsumer-2 (139712901240576)'
Oct 10 17:01:01 cse-25.vcd.zpod.io sh[9621]: Started thread 'MessageConsumer-3 (139712909633280)'
Oct 10 17:01:01 cse-25.vcd.zpod.io sh[9621]: Started thread 'MessageConsumer-4 (139712882005760)'
Oct 10 17:01:01 cse-25.vcd.zpod.io sh[9621]: Container Service Extension for vCloud Director
Oct 10 17:01:01 cse-25.vcd.zpod.io sh[9621]: Server running using config file: /home/cse/config.yaml
Oct 10 17:01:01 cse-25.vcd.zpod.io sh[9621]: Log files: cse-logs/cse-server-info.log, cse-logs/cse-server-debug.log
Oct 10 17:01:01 cse-25.vcd.zpod.io sh[9621]: waiting for requests (ctrl+c to close)

Success!! Now I’m ready to start interacting with the CSE server with the CSE client via the vcd-cli tool.

Conclusion

In Part 1 of my series on CSE Installation, I detailed the steps required to install the CSE 2.5.0 bits within a Python 3.7.3 virtual environment. I also took a detailed look at the configuration file used to power the CSE Server before installing and running the server itself.

Join me in Part 2 of this series on the Container Service Extension, where I’ll walk through configuring a tenant to allow provisioning of Kubernetes clusters via the CSE extension in vcd-cli!!

Backing Up Your Kubernetes Applications with Velero v1.1

In this post, I’m going to walk through the process of installing and using Velero v1.1 to back up a Kubernetes application that includes persistent data stored in persistentvolumes. I will then simulate a DR scenario by completely deleting the application and using Velero to restore the application to the cluster, including the persistent data.

Meet Velero!! ⛵

Velero is a backup and recovery solution built specifically to assist in the backup (and migration) of Kubernetes applications, including their persistent storage volumes. You can even use Velero to back up an entire Kubernetes cluster for restore and/or migration! Velero addresses various use cases, including but not limited to:

  • Taking backups of your cluster to allow for restore in case of infrastructure loss/corruption
  • Migration of cluster resources to other clusters
  • Replication of production cluster/applications to dev and test clusters

Velero is essentially comprised of two components:

  • A server that runs as a set of resources within your Kubernetes cluster
  • A command-line client that runs locally

Velero also supports the backup and restore of Kubernetes volumes using restic, an open source backup tool. Velero will need to utilize an S3 API-compatible storage server to store these volumes. To satisfy this requirement, I will also deploy a Minio server in my Kubernetes cluster so Velero is able to store my Kubernetes volume backups. Minio is a lightweight, easy-to-deploy S3 object store that you can run on premises. In a production environment, you’d want to deploy your S3-compatible storage solution in another cluster or environment to prevent total data loss in case of infrastructure failure.

Environment Overview

As a level set, I’d like to provide a little information about the infrastructure I am using in my lab environment. See below for infrastructure details:

  • VMware vCenter Server Appliance 6.7u2
  • VMware ESXi 6.7u2
  • VMware NSX-T Datacenter 2.5.0
  • VMware Enterprise PKS 1.5.0

Enterprise PKS handles the Day 1 and Day 2 operational requirements for deploying and managing my Kubernetes clusters. Click here for additional information on VMware Enterprise PKS.

However, I do want to mention that Velero can be installed and configured to interact with ANY Kubernetes cluster of version 1.7 or later (1.10 or later for restic support).
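
If you’re not sure which version your cluster is running, a quick check from any workstation with kubectl access to the cluster will confirm it:

$ kubectl version --short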

Installing Minio

First, I’ll deploy all of the components required to support the Velero service, starting with Minio.

First things first, I’ll create the velero namespace to house the Velero installation in the cluster:

$ kubectl create namespace velero

I also decided to create a dedicated storageclass for the Minio service to use for its persistent storage. In Enterprise PKS Kubernetes clusters, you can configure the vSphere Cloud Provider plugin to dynamically create VMDKs in your vSphere environment to support persistentvolumes whenever a persistentvolumeclaim is created in the Kubernetes cluster. Click here for more information on the vSphere Cloud Provider plugin:

$ kubectl create -f minio-storage-class.yaml 


kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: minio-disk
provisioner: kubernetes.io/vsphere-volume
parameters:
    diskformat: thin

Now that we have a storage class, I’m ready to create a persistentvolumeclaim the Minio service will use to store the volume backups via restic. As you can see from the example .yaml file below, the previously created storageclass is referenced to ensure the persistentvolume is provisioned dynamically:

$ kubectl create -f minio-pvc.yaml


kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: minio-claim
  namespace: velero
  annotations:
    volume.beta.kubernetes.io/storage-class: minio-disk
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 6Gi

Verify the persistentvolumeclaim was created and its status is Bound:

$ kubectl get pvc -n velero

NAME          STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
minio-claim   Bound    pvc-cc7ac855-e5f0-11e9-b7eb-00505697e7e7   6Gi        RWO            minio-disk     8s

Now that I’ve created the storage to support Minio, I am ready to create the Minio deployment itself. Click here for access to the full .yaml file for the Minio deployment:

$ kubectl create -f minio-deploy.yaml 

deployment.apps/minio created
service/minio created
secret/cloud-credentials created
job.batch/minio-setup created
ingress.extensions/velero-minio created

Use kubectl to wait for the minio-xxxx pod to enter the Running status:

$ kubectl get pods -n velero -w

NAME                    READY   STATUS              RESTARTS   AGE
minio-754667444-zc2t2   0/1     ContainerCreating   0          4s
minio-setup-skbs6       1/1     Running             0          4s
NAME                    READY   STATUS              RESTARTS   AGE
minio-754667444-zc2t2   1/1     Running             0          9s
minio-setup-skbs6       0/1     Completed           0          11s

Now that our Minio application is deployed, we need to expose the Minio service to requests outside of the cluster via a LoadBalancer service type with the following command:

$ kubectl expose deployment minio --name=velero-minio-lb --port=9000 --target-port=9000 --type=LoadBalancer --namespace=velero

Note: because of the integration between VMware Enterprise PKS and VMware NSX-T Datacenter, when I create a “LoadBalancer” service type in the cluster, the NSX Container Plugin, which I am using as the Container Network Interface, reaches out to the NSX-T API to automatically provision a virtual server in an NSX-T L4 load balancer.
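
If your cluster doesn’t have a load balancer integration, a quick (if less durable) alternative for reaching the Minio UI is a kubectl port-forward, which tunnels a local port on your workstation to the Minio pod:

$ kubectl port-forward deployment/minio 9000:9000 -n velero

You could then browse to http://localhost:9000 instead of the EXTERNAL-IP shown below.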

I’ll use kubectl to retrieve the IP of the virtual server created within the NSX-T load balancer and access the Minio UI in my browser at EXTERNAL-IP:9000. I am looking for the IP address under the EXTERNAL-IP column for the velero-minio-lb service, 10.96.59.116 in this case:

$ kubectl get services -n velero

NAME              TYPE           CLUSTER-IP       EXTERNAL-IP    PORT(S)          AGE
minio             ClusterIP      10.100.200.160   <none>         9000/TCP         7m14s
velero-minio-lb   LoadBalancer   10.100.200.77    10.96.59.116   9000:30711/TCP   12s

Now that Minio has been successfully deployed in my Kubernetes cluster, I’m ready to move on to the next section to install and configure Velero and restic.

Installing Velero and Restic

Now that I have an S3-compatible storage solution deployed in my environment, I am ready to complete the installation of Velero (and restic).

However, before I move forward with the installation of Velero, I need to install the Velero CLI client on my workstation. The instructions detailed below will allow you to install the client on a Linux server (I’m using a CentOS 7 instance).

First, I navigated to the Velero GitHub releases page and copied the link for the v1.1.0 Linux tarball:

Then, I used wget to pull the tarball down to my Linux server, extracted the contents of the file, and moved the velero binary into my path:

cd ~/tmp
wget https://github.com/vmware-tanzu/velero/releases/download/v1.1.0/velero-v1.1.0-linux-amd64.tar.gz
tar -xvf velero-v1.1.0-linux-amd64.tar.gz
sudo mv velero-v1.1.0-linux-amd64/velero /usr/bin/velero

Now that I have the Velero client installed on my server, I am ready to continue with the installation.

I’ll create a credentials-velero file that we will use during install to authenticate against the Minio service. Velero will use these credentials to access Minio to store volume backups:

$ cat credentials-velero

[default]
aws_access_key_id = minio
aws_secret_access_key = minio123

Now I’m ready to install Velero! The following command will complete the installation of Velero (and restic) where:

  • --provider aws instructs Velero to utilize S3 storage which is running on-prem, in my case
  • --secret-file is our Minio credentials
  • --use-restic flag ensures Velero knows to deploy restic for persistentvolume backups
  • --s3Url value is the address of the Minio service that is only resolvable from within the Kubernetes cluster
  • --publicUrl value is the IP address for the LoadBalancer service that allows access to the Minio UI from outside of the cluster:

$ velero install --provider aws --bucket velero --secret-file credentials-velero \
    --use-volume-snapshots=false --use-restic --backup-location-config \
    region=minio,s3ForcePathStyle="true",s3Url=http://minio.velero.svc:9000,publicUrl=http://10.96.59.116:9000

---output omitted---

Velero is installed! ⛵ Use ‘kubectl logs deployment/velero -n velero’ to view the status.

Note: The velero install command creates a set of CRDs that power the Velero service. You can run velero install --dry-run -o yaml to output all of the .yaml files used to create the Velero deployment.

After the installation is complete, I’ll verify that I have 3 restic-xxx pods and 1 velero-xxx pod deployed in the velero namespace. As the restic service is deployed as a daemonset, I expect to see one restic pod per node in my cluster. I have 3 worker nodes, so I should see 3 restic pods:

Note: Notice the status of the restic-xxx pods…

$ kubectl get pod -n velero
NAME                      READY   STATUS             RESTARTS   AGE
minio-5559c4749-7xssq     1/1     Running            0          7m21s
minio-setup-dhnrr         0/1     Completed          0          7m21s
restic-mwgsd              0/1     CrashLoopBackOff   4          2m17s
restic-xmbzz              0/1     CrashLoopBackOff   4          2m17s
restic-235cz              0/1     CrashLoopBackOff   4          2m17s
velero-7d876dbdc7-z4tjm   1/1     Running            0          2m17s

As you may notice, the restic pods are not able to start. That is because in Enterprise PKS Kubernetes clusters, the path to the pods on the nodes is a little different (/var/vcap/data/kubelet/pods) than in “vanilla” Kubernetes clusters (/var/lib/kubelet/pods). In order to allow the restic pods to run as expected, I’ll need to edit the restic daemonset and change the hostPath value as referenced below:

$ kubectl edit daemonset restic -n velero


volumes:
      - hostPath:
          path: /var/vcap/data/kubelet/pods
          type: ""
        name: host-pods

Now I’ll verify all of the restic pods are in Running status:

$ kubectl get pod -n velero

NAME                      READY   STATUS      RESTARTS   AGE
minio-5559c4749-7xssq     1/1     Running     0          12m
minio-setup-dhnrr         0/1     Completed   0          12m
restic-p4d2c              1/1     Running     0          6s
restic-xvxkh              1/1     Running     0          6s
restic-e31da              1/1     Running     0          6s
velero-7d876dbdc7-z4tjm   1/1     Running     0          7m36s

Woohoo!! Velero is successfully deployed in my Kubernetes clusters. Now I’m ready to take some backups!!

Backup/Restore the WordPress Application using Velero

Now that I’ve deployed Velero and all of its supporting components in my cluster, I’m ready to perform some backups. But in order to test my backup/recovery solution, I’ll need an app, preferably one that utilizes persistent data.

In one of my previous blog posts, I walked through the process of deploying Kubeapps in my cluster to allow me to easily deploy application stacks to my Kubernetes cluster.

For this exercise, I’ve used Kubeapps to deploy a WordPress blog that utilizes persistentvolumes to store post data for my blog. I’ve also populated the blog with a test post to test backup and recovery.

First, I’ll verify that the WordPress pods are in a Running state:

$ kubectl get pods -n wordpress

NAME                                  READY   STATUS    RESTARTS   AGE
cut-birds-mariadb-0                   1/1     Running   0          23h
cut-birds-wordpress-fbb7f5b76-lm5bh   1/1     Running   0          23h

I’ll also verify the URL of my blog and access it via my web browser to verify current state:

$ kubectl get svc -n wordpress

NAME                  TYPE           CLUSTER-IP      EXTERNAL-IP    PORT(S)                      AGE
cut-birds-mariadb     ClusterIP      10.100.200.39   <none>         3306/TCP                     19h
cut-birds-wordpress   LoadBalancer   10.100.200.32   10.96.59.116   80:32393/TCP,443:31585/TCP   19h

Everything looks good, especially the cat!!

In order for Velero to understand where to look for persistent data to back up, in addition to other Kubernetes resources in the cluster, we need to annotate each pod that is utilizing a volume so Velero backs up the pods AND the volumes.

I’ll review both of the pods in the wordpress namespace to view the name of each volume being used by each pod:

$ kubectl describe pod/cut-birds-mariadb-0 -n wordpress

---output omitted---

Volumes:
  data:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  data-cut-birds-mariadb-0
    ReadOnly:   false
  config:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      cut-birds-mariadb
    Optional:  false
  default-token-6q5xt:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-6q5xt
    Optional:    false


$ kubectl describe pods/cut-birds-wordpress-fbb7f5b76-lm5bh -n wordpress

---output omitted---

Volumes:
  wordpress-data:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  cut-birds-wordpress
    ReadOnly:   false
  default-token-6q5xt:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-6q5xt
    Optional:    false

As you can see, the mariadb pod is using 2 volumes: data and config, while the wordpress pod is utilizing a single volume: wordpress-data.

I’ll run the following commands to annotate each pod with the backup.velero.io tag, listing each pod’s corresponding volume(s):

$ kubectl -n wordpress annotate pod/cut-birds-mariadb-0 backup.velero.io/backup-volumes=data,config
$ kubectl -n wordpress annotate pod/cut-birds-wordpress-fbb7f5b76-lm5bh backup.velero.io/backup-volumes=wordpress-data

Now I’m ready to use the velero client to create a backup. I’ll name the backup wordpress-backup and ensure the backup only includes the resources in the wordpress namespace:

$ velero backup create wordpress-backup --include-namespaces wordpress

Backup request "wordpress-backup" submitted successfully.
Run `velero backup describe wordpress-backup` or `velero backup logs wordpress-backup` for more details.

I can also use the velero client to ensure the backup has completed by waiting for Phase: Completed:

$ velero backup describe wordpress-backup

Name:         wordpress-backup
Namespace:    velero
Labels:       velero.io/storage-location=default
Annotations:  <none>

Phase:  Completed

--output omitted--

I’ll navigate back to the web browser and refresh (or log back into) the Minio UI. Notice the restic folder, which houses our backups’ persistent data, as well as a backups folder:

I’ll select the backups folder and note the wordpress-backup folder in the subsequent directory. I’ll also explore the contents of the wordpress-backup folder, which contains all of the Kubernetes resources from my wordpress namespace:

Now that I’ve confirmed my backup was successful and have verified the data has been stored in Minio via the web UI, I am ready to completely delete my WordPress application. I will accomplish this by deleting the wordpress namespace, which will delete all resources created in the namespace to support the WordPress application, even the persistentvolumeclaims:

$ kubectl delete namespace wordpress


$ kubectl get pods -n wordpress
$ kubectl get pvc -n wordpress

After I’ve confirmed all of the resources in the wordpress namespace have been deleted, I’ll refresh the browser to verify the blog is no longer available.

Now we’re ready to restore!! I’ll use the velero client to verify the existence/name of the backup that was previously created and restore the backup to the cluster:

$ velero backup get

NAME               STATUS      CREATED                         EXPIRES   STORAGE LOCATION   SELECTOR
wordpress-backup   Completed   2019-10-03 15:47:07 -0400 EDT   29d       default            <none>


$ velero restore create --from-backup wordpress-backup

I can monitor the pods in the wordpress namespace and wait for both pods to show 1/1 in the READY column and Running in the STATUS column:

$ kubectl get pods -n wordpress -w

NAME                                  READY   STATUS     RESTARTS   AGE
cut-birds-mariadb-0                   0/1     Init:0/1   0          12s
cut-birds-wordpress-fbb7f5b76-qtcpp   0/1     Init:0/1   0          13s
cut-birds-mariadb-0                   0/1     PodInitializing   0          18s
cut-birds-mariadb-0                   0/1     Running           0          19s
cut-birds-wordpress-fbb7f5b76-qtcpp   0/1     PodInitializing   0          19s
cut-birds-wordpress-fbb7f5b76-qtcpp   0/1     Running           0          20s
cut-birds-mariadb-0                   1/1     Running           0          54s
cut-birds-wordpress-fbb7f5b76-qtcpp   1/1     Running           0          112s

Then, I can verify the URL of the WordPress blog:

$ kubectl get services -n wordpress

NAME                  TYPE           CLUSTER-IP      EXTERNAL-IP    PORT(S)                      AGE
cut-birds-mariadb     ClusterIP      10.100.200.39   <none>         3306/TCP                     2m56s
cut-birds-wordpress   LoadBalancer   10.100.200.32   10.96.59.120   80:32393/TCP,443:31585/TCP   2m56s

And finally, I can access the URL of the blog in the web browser and confirm the test post that was visible initially is still present:

There you have it!! Our application and its persistent data have been completely restored!!

In this example, we manually created a backup, but we can also use the Velero client to schedule backups on a certain interval. See examples below:

velero schedule create planes-daily --schedule="0 1 * * *" --include-namespaces wordpress
velero schedule create planes-daily --schedule="@daily" --include-namespaces wordpress

Conclusion

In this blog post, I walked through the process of installing Velero in a Kubernetes cluster, including all its required components, to support taking backups of Kubernetes resources. I also walked through the process of taking a backup, simulating a data loss scenario, and restoring that backup to the cluster.

Using Harbor and Kubeapps to Serve Custom Helm Charts

In my last post, I walked through the process of deploying Kubeapps in an Enterprise PKS Kubernetes cluster. In this post, I wanted to examine the workflow required for utilizing Harbor, an open source cloud native registry, as an option to serve out a curated set of Helm charts to developers in an organization. We’ll walk through a couple of scenarios, including configuring a “private” project in Harbor that houses Helm charts and container images for a specific group of developers. Building on my last post, we’ll also add this new Helm chart repository into our Kubeapps deployment to allow our developers to deploy our curated applications directly from the Kubeapps dashboard.

Harbor is an open source trusted cloud native registry project that stores, signs, and scans content. Harbor extends the open source Docker Distribution by adding the functionalities usually required by users, such as security, identity, and management. Having a registry closer to the build and run environment can improve image transfer efficiency. Harbor supports replication of images between registries, and also offers advanced security features such as user management, access control, and activity auditing. Enterprise support for Harbor Container Registry is included with VMware Enterprise PKS.

Along with the ability to host container images, Harbor also recently added functionality to act as a Helm chart repository. Harbor admins create “projects” that are normally dedicated to certain teams or environments. These projects, public or private, house container images as well as Helm charts to allow our developers to easily deploy curated applications in their Kubernetes cluster(s).

We already have Harbor deployed in our environment as an OpsMan tile. For more information on installing Harbor in conjunction with Enterprise PKS, see documentation here. For instructions detailing the Harbor installation procedure outside of an Enterprise PKS deployment, see the community documentation here.

Let’s get started!!

Creating a Private Project in Harbor

The first thing we’ll need to do is create a new private project that we’ll use to store our container images and Helm charts for our group of developers.

Navigate to the Harbor web UI and login with the admin credentials defined on install. Once logged in, select the + New Project button above the list of existing projects:

Name the project (developers-private-project in our case) and leave the Public option unchecked, as we only want our specific developer group to have access to this project:

Select the newly created project from the list and note the different menus we have available to us regarding the project, including Repositories, which will house our container images, as well as Helm Charts, which will house our Helm charts. We can also add individual members to the project to allow them to authenticate to the project with a username/password combination when pulling/pushing images or Helm charts to the project. For now, let’s select the Configuration tab and select the Automatically scan images on push option. This will instruct Harbor to scan container images for possible CVEs when they are uploaded to the project. Select Save:

Now that we’ve configured our private project, we need to upload our container image that will serve as the basis for our app.

Upload Image to Private Harbor Project

Now that we’ve created our project, we need to populate the project with the container image we are going to use to power this application.

In this example, we are using a simple “To Do List” application. Additional details on the application can be found here.

You’ll need access to a server with docker installed to perform this workflow. I am using the same Linux server where my Helm client is installed.

First, pull the docker image from the public repository:

$ docker pull prydonius/todo

Verify the image has been pulled:

$ docker images
REPOSITORY          TAG                 IMAGE ID            CREATED             SIZE
prydonius/todo      latest              4089c4ba4620        24 months ago       107MB

Since the project is private, we need to use docker login to authenticate against Harbor. Use the Harbor admin user credentials to authenticate:

$ docker login harbor.pks.zpod.io

Login Succeeded

Now we can tag the todo image with the Harbor URL, project/repo name, and tag (which we define; v1 in this case) and push it to our private registry:

$ docker tag prydonius/todo:latest harbor.pks.zpod.io/developers-private-project/todo:v1


$ docker push harbor.pks.zpod.io/developers-private-project/todo:v1

Let’s head over to the Harbor web UI and ensure our image has been successfully uploaded. Navigate to the Projects tab in the left hand menu, select the developers-private-project project, and ensure the todo image is present:

While we’re here, let’s click on the link for the image and examine the vulnerabilities:

As we selected the option to scan all images on push, our todo container was automatically scanned when it was uploaded. There are a couple of vulnerabilities of “High” severity that we’d want to examine before pushing this app to production. Harbor also provides the ability to set rules in the configuration for each project to ensure containers with known vulnerabilities are not deployed in clusters. Our development environment is not exposed outside of our datacenter so we can let this slide…for now.

Now that we’ve uploaded our container image, we are ready to build our custom Helm chart that will utilize this image in our Harbor repository to build the application in our Kubernetes cluster.

Creating our Custom Helm Chart

As discussed in the last post, Helm uses charts, a collection of files that describe a related set of Kubernetes resources, to simplify the deployment of applications in a Kubernetes cluster. Today, we are going to build a simple Helm chart that deploys our todo app and exposes the app via a load balancer.

We’ll navigate to the server running the Helm client and issue the following command which will build out the scaffolding required for a Helm chart. We’ll call this chart dev-to-do-chart:

$ helm create dev-to-do-chart

The following directory structure will be created:

dev-to-do-chart
|-- Chart.yaml
|-- charts
|-- templates
|   |-- NOTES.txt
|   |-- _helpers.tpl
|   |-- deployment.yaml
|   |-- ingress.yaml
|   `-- service.yaml
`-- values.yaml

The templates/ directory is where Helm finds the YAML definitions for your Services, Deployments and other Kubernetes objects. We will define variables for our deployment in the values.yaml file. Values here can be dynamically set at deployment time to define things such as using an Ingress resource to expose the application or assigning persistent storage to the app.
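
For instance, the scaffolded templates/deployment.yaml consumes these values via Go templating; the image reference looks roughly like the snippet below (the exact scaffolding varies slightly between Helm versions):

image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
imagePullPolicy: {{ .Values.image.pullPolicy }}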

Let’s edit the values.yaml file to add a couple of additional bits of information. We want to define the image that we will use to back our application deployment. We’ll use the todo container image that we just uploaded to our private project.

Also, since this project/repository is private, we need to create a Kubernetes secret that contains access information for the repository so Kubernetes (and docker) is allowed to pull the image. For additional information on this process, see the Kubernetes documentation here.
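
For reference, creating a secret of this type generally looks like the command below, run in the namespace the application will be deployed into (values here mirror our environment; the password is elided):

$ kubectl create secret docker-registry private-repo-sec \
  --docker-server=harbor.pks.zpod.io \
  --docker-username=admin \
  --docker-password=<password>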

In our example, I have created the private-repo-sec secret that we will add to the values.yaml, along with the image name:

$ vi dev-to-do-chart/values.yaml

---
image:
  repository: harbor.pks.zpod.io/developers-private-project/todo
  tag: v1
  pullPolicy: IfNotPresent
imagePullSecrets:
- name: private-repo-sec

This will instruct Helm to build a Kubernetes deployment that contains a pod comprised of our todo container from our developers-private-project repo and utilize the private-repo-sec secret to authenticate to the private project.

Let’s also create a README.md file (in the dev-to-do-chart directory) that will display information about the Helm chart in our Kubeapps dashboard:

$ vi dev-to-do-chart/README.md 

___
This chart will deploy the "To Do" application. 

Set "Service" to type "LoadBalancer" in the values file to expose the application via an L4 NSX-T load balancer.
___

Now that we’ve configured our chart, we need to package it up so we can upload it to our Harbor chart repo to share with our developers. Navigate back to the parent directory and run the following command to package the chart:

$ helm package ./dev-to-do-chart
Successfully packaged chart and saved it to: /home/user/dev-to-do-chart-0.1.0.tgz
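
Optionally, we can also lint the chart to catch templating mistakes before sharing it:

$ helm lint ./dev-to-do-chart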

We’ve created and packaged our custom Helm chart for our developers, now we’re ready to upload the chart to Harbor so they can deploy the todo application!!

Uploading Custom Helm Chart to Harbor

There are two ways to upload a Helm chart to Harbor:

  • Via the Harbor web UI
  • Via the Helm CLI tool

We are going to use the Helm CLI tool to push the chart to our private project. The first thing we’ll need to do is grab the ca.crt for our project which will allow us to add the chart repo from our Harbor project to our local Helm client.

Navigate back to the homepage for the developers-private-project and select the Registry Certificate link:

This will download the ca.crt that we can use in the following command to push our Helm chart to our project. Since the project is private, we will need to authenticate with the admin user’s credentials as well as the ca.crt when we add the repo to our Helm repo list:

Note: These commands should be run from the Linux server where the Helm client is installed.

helm repo add developers-private-project --ca-file=ca.crt --username=admin --password=<password> https://harbor.pks.zpod.io/chartrepo/developers-private-project

Let’s verify the repo was added to our Helm repo list:

$ helm repo list
NAME                        URL                                                                                               
developers-private-project  https://harbor.pks.zpod.io/chartrepo/developers-private-project

It should be noted that the native Helm CLI does not support pushing charts, so we need to install the helm-push plugin:

$ helm plugin install https://github.com/chartmuseum/helm-push

Now we’re ready to push our chart to our Harbor project:

$ helm push --ca-file=ca.crt --username=admin --password=<password> dev-to-do-chart-0.1.0.tgz developers-private-project
Pushing dev-to-do-chart-0.1.0.tgz to developers-private-project...
Done.

Let’s update our helm repos and search for our chart via the Helm CLI to confirm it is available in our project’s chart repo:

$ helm repo update


$ helm search dev-to-do
NAME                                        CHART VERSION   APP VERSION DESCRIPTION                
developers-private-project/dev-to-do-chart  0.1.0           1.0         A Helm chart for Kubernetes
local/dev-to-do-chart                       0.1.0           1.0         A Helm chart for Kubernetes

Now let’s confirm we can see it in the Harbor web UI as well. Navigate back to the developers-private-project homepage and select the Helm Charts tab:

Awesome!! Now we’re finally ready to add our private chart repo into our Kubeapps deployment so our developers can deploy our to-do app via the Kubeapps dashboard.

Adding a Private Project Helm Chart Repo to Kubeapps

Now that we’ve created our private project, populated with our custom container image and helm chart, we are ready to add the Helm chart repo into our Kubeapps deployment so our developers can deploy the to-do application via the Kubeapps dashboard.

First, we need to access our Kubeapps dashboard. Once we’ve authenticated with our token, hover over the Configuration button in the top right-hand corner and select the App Repositories option from the drop down:

Select the Add App Repository button and fill in the required details. We are using basic authentication with the Harbor admin user’s credentials. We will also need to add our ca.crt file. When finished, select the Install Repo button:

If all the credentials have been populated correctly, we can click on the developers-private-project link and see our dev-to-do-chart Helm chart:

Now, our developers can log in to the Kubeapps dashboard, select the Catalog option, search for our dev-to-do-chart, click on the entry, and select the Deploy button on the subsequent browser page:

In order for our developers to expose this app for access from outside of the Kubernetes cluster, we need to change the Service from ClusterIP to LoadBalancer:

Once they’ve made this change, they can select the Submit button to deploy the application in their Kubernetes cluster. The subsequent webpage will show us information about our deployment, including the URL (IP of the NSX-T load balancer that was automatically created, highlighted with a red box in the screenshot) as well as the current state of the deployment:

Note: The automatic creation of the LoadBalancer service is made possible by the integration between NSX-T and Enterprise PKS. These instructions will need to be augmented to provide the same functionality when running on a different set of infrastructure.

Navigate to the IP address of the load balancer to test application access:

Boom!! There we have it, our application being served out via our NSX-T L4 load balancer resource.

Conclusion

In this post, we walked through the steps required to create a private Harbor project for our developers that will house custom container images and Helm charts as well as building a custom Helm chart and uploading our container image and custom Helm chart to that private project.

We also walked through the process of adding a private Helm chart repo, hosted by our Harbor deployment, in to our Kubeapps dashboard so our developers can deploy this custom application for testing in their Kubernetes clusters.

Deploying Kubeapps and Exposing the Dashboard via Ingress Controller in Enterprise PKS

In this post, I’d like to take some time to walk through the process of deploying Kubeapps in an Enterprise PKS kubernetes cluster. I’ll also walk through the process of utilizing the built-in ingress controller provided by NSX-T to expose the Kubeapps dashboard via a fully qualified domain name.

What is Kubeapps?

There’s been a lot of excitement in the Cloud Native space at VMware since the acquisition of Bitnami last year. The Bitnami team has done a lot of amazing work over the years to simplify the process of application deployment across all types of infrastructure, both in public and private clouds. Today we are going to take a look at Kubeapps. Kubeapps, an open source project developed by the folks at Bitnami, is a web-based UI for deploying and managing applications in Kubernetes clusters. Kubeapps allows users to:

  • Browse and deploy Helm charts from chart repositories
  • Inspect, upgrade and delete Helm-based applications installed in the cluster
  • Add custom and private chart repositories (supports ChartMuseum and JFrog Artifactory)
  • Browse and provision external services from the Service Catalog and available Service Brokers
  • Connect Helm-based applications to external services with Service Catalog Bindings
  • Secure authentication and authorization based on Kubernetes Role-Based Access Control

Assumptions/Pre-reqs

Before we get started, I wanted to lay out some assumptions and pre-reqs regarding the environment I’m using to support this Kubeapps deployment. First, some info about the infrastructure I’m using to support my kubernetes cluster:

  • vSphere 6.7u2
  • NSX-T 2.4
  • Enterprise PKS 1.4.1
  • vSphere Cloud Provider configured for persistent storage
  • A wildcard DNS entry to support your app ingress strategy

I’m also making the assumption that you have Helm installed on your kubernetes cluster as well. Helm is a package manager for kubernetes. Helm uses a packaging format called charts. A chart is a collection of files that describe a related set of Kubernetes resources. A single chart might be used to deploy something simple, like a memcached pod, or something complex, like a full web app stack with HTTP servers, databases, caches, and so on. Kubeapps uses Helm charts to deploy application stacks to kubernetes clusters so Helm must be deployed in the cluster prior to deploying Kubeapps. In this tutorial, we’re actually going to deploy kubeapps via the helm chart as well!

Finally, in order for Kubeapps to be able to deploy applications into the cluster, we will need to create a couple of Kubernetes RBAC resources. First, we’ll create a serviceaccount (called kubeapps-operator) and attach a clusterrole to the serviceaccount via a clusterrolebinding to allow the service account to deploy apps in the cluster. For the sake of simplicity, we are going to assign this service account cluster-admin privileges. This means the kubeapps-operator service account has the highest level of access to the kubernetes cluster. This is NOT recommended in production environments. I’ll be publishing a follow-up post on best practices for deploying Helm and Kubeapps in a production environment soon. Stay tuned!

Preparing the Cluster for a Kubeapps Deployment

The first thing we’ll want to do is add the Bitnami repo to our Helm configuration, as the Bitnami repo houses the Kubeapps Helm chart:

$ helm repo add bitnami https://charts.bitnami.com/bitnami

Now that we’ve added the repo, let’s create a namespace for our Kubeapps deployment to live in:

$ kubectl create ns kubeapps

Now we’re ready to create our serviceaccount and attach our clusterrole to it:

$ kubectl create serviceaccount kubeapps-operator 
$ kubectl create clusterrolebinding kubeapps-operator \
--clusterrole=cluster-admin \
--serviceaccount=default:kubeapps-operator
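
Since we’ll eventually authenticate to the Kubeapps dashboard with this service account’s token, this is a convenient time to retrieve it. One common approach (assuming the token lives in the service account’s first secret, which is the default) is:

$ kubectl get secret $(kubectl get serviceaccount kubeapps-operator \
  -o jsonpath='{.secrets[0].name}') -o jsonpath='{.data.token}' | base64 --decode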

Let’s use Helm to deploy our Kubeapps application!!

helm install --name kubeapps --namespace kubeapps bitnami/kubeapps \
--set mongodb.securityContext.enabled=false \
--set mongodb.mongodbEnableIPv6=false

Note, we could opt to set frontend.service.type=LoadBalancer if we wanted to utilize the Enterprise PKS/NSX-T integration to expose the dashboard via a dedicated IP but since we’re going to use an Ingress controller (also provided by NSX-T), we’ll leave that option out.

After a minute or two, we can check what was deployed via the Kubeapps Helm chart and ensure all the pods are available:

$ kubectl get all -n kubeapps

Exposing the Kubeapps Dashboard via FQDN

Our pods and services are now available, but we haven’t exposed the dashboard for access from outside of the cluster yet. For that, we need to create an ingress resource. If you review the output from the screenshot above, the kubeapps service, of type ClusterIP, is serving out our dashboard on port 80. The kubernetes service type of ClusterIP only exposes our service internally within the cluster so we’ll need to create an ingress resource that targets this service on port 80 so we can expose the dashboard to external users.

Part of the Enterprise PKS and VMware NSX-T integration provides an ingress controller per kubernetes cluster provisioned. This ingress controller is actually an L7 Load Balancer in NSX-T primitives. Any time we create an ingress service type in our Enterprise PKS kubernetes cluster, NSX-T automatically creates an entry in the L7 load balancer to redirect traffic, based on hostname, to the correct services/pods in the cluster.

As mentioned in the Pre-reqs section, I’ve got a wildcard DNS entry that redirects *.prod.example.com to the IP address of the NSX-T L7 Load Balancer. This allows my developers to use the native kubernetes ingress services to define the hostname of their applications without having to work with me or my infrastructure team to manually update DNS records every time they want to expose an application to the public.

Enough talk, let’s deploy our ingress resource! I’ve used the .yaml file below to expose my Kubeapps dashboard at kubeapps.prod.example.com:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: kubeapps-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - host: kubeapps.prod.example.com
    http:
      paths:
      - path: /*
        backend:
          serviceName: kubeapps 
          servicePort: 80

As we can see, we are telling the Ingress service to target the kubeapps service on port 80 to “proxy” the dashboard to the public. Now let’s create that ingress resource:

$ kubectl create -f kubeapps-ingress.yaml -n kubeapps

And review the resource to get our hostname and confirm the IP address of the NSX-T L7 Load Balancer:

$ kubectl get ing -n kubeapps
NAME               HOSTS                       ADDRESS                     PORTS   AGE
kubeapps-ingress   kubeapps.prod.example.com   10.96.59.106,100.64.32.27   80      96m

Note, the 10.96.59.106 address is the IP of the NSX-T Load Balancer, which is where my DNS wildcard is directing requests to, and the HOSTS entry is the hostname our Kubeapps dashboard should be accessible on. So let’s check it out!
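
If you’d like to verify from the CLI before opening a browser, a quick curl against the hostname should confirm the ingress path is wired up (any 2xx/3xx response is a good sign):

$ curl -I http://kubeapps.prod.example.com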

Now we’re ready to deploy applications in our kubernetes cluster with the click of a button!!

Behind the Scenes with NSX-T

So let’s have a look at what’s actually happening in NSX-T and how we can cross reference this with what’s going on with our Kubernetes resources. As I mentioned earlier, any time an Enterprise PKS cluster is provisioned, two NSX-T Load Balancers are created automatically:

  • An L4 load balancer that fronts the kubernetes master(s) to expose the kubernetes API to external users
  • An L7 load balancer that acts as the ingress controller for the cluster

So, we’ve created an ingress resource for our Kubeapps dashboard, let’s look at what’s happening in the NSX-T manager.

So let’s navigate to the NSX-T manager, login with our admin credentials and navigate to the Advanced Networking and Security tab. Navigate to Load Balancing and choose the Server Pools tab on the right side of the UI. I’ve queried the PKS API to get the UUID for my cluster (1cd1818c...), which corresponds with the LB we want to inspect (Note: you’ll see two LB entries for the UUID mentioned, one for kubernetes API, the other for the ingress controller):

Select the Load Balancer in question and then select the Pool Members option on the right side of the UI:

This will show us two kubernetes pods and their internal IP addresses. Let’s go back to the CLI and compare this with what we see in the cluster:

$ kubectl get pods -l app=kubeapps -o wide -n kubeapps
NAME                        READY   STATUS    RESTARTS   AGE    IP            NODE                                   
kubeapps-7cd9986dfd-7ghff   1/1     Running   0          124m   172.16.17.6   0faf789a-18db-4b3f-a91a-a9e0b213f310
kubeapps-7cd9986dfd-mwk6j   1/1     Running   0          124m   172.16.17.7   8aa79ec7-b484-4451-aea8-cb5cf2020ab0

So this confirms that our 2 pods serving out our Kubeapps dashboard are being fronted by our L7 Load Balancer in NSX-T.

Conclusion

I know that was a lot to take in, but I wanted to make sure to review the actions we performed in this post:

  • Created a serviceaccount and clusterrolebinding to allow Kubeapps to deploy apps
  • Deployed our Kubeapps application via a Helm Chart
  • Exposed the Kubeapps dashboard for external access via our NSX-T “ingress controller”
  • Verified that Enterprise PKS and NSX-T worked together to automate the creation of all of these network resources to support our applications

As I mentioned above, stay tuned for a follow up post that will detail security implications for deploying Helm and Kubeapps in Production environments. Thanks for reading!!!

Creating a virtualenv with Python 3.7.3

As I’ve mentioned in recent posts, VMware’s Container Service Extension 2.0 (CSE) has recently been released. The big news around the 2.0 release is the ability to provision Enterprise PKS clusters via CSE.

It’s important to note that CSE 2.0 has a dependency on Python 3.7.3 or later. I had some trouble managing different versions of Python 3 on the CentOS host I used to support the CSE server component, so I wanted to document my steps in creating a virtual environment via virtualenv utilizing Python 3.7.3 and installing CSE Server 2.0 within the virtual environment.

virtualenv is a tool to create isolated Python environments. virtualenv creates a folder which contains all the necessary executables to use the packages that a Python project would need. This is useful in my situation, as I had various versions of Python 3 installed on my CentOS server and I wanted to ensure Python 3.7.3 was being utilized exclusively for the CSE installation while not affecting other services running on the server utilizing Python 3.

Installing Python 3.7.3 on CentOS

The first thing we need to do is install (and compile) Python 3.7.3 on our CentOS server.

We’ll need some development packages and the GCC compiler installed on the server:

# yum install -y zlib-devel gcc openssl-devel bzip2-devel libffi-devel

Next, we’ll pull down the Python 3.7.3 bits from the official Python site and unpack the archive:

# cd /usr/src
# wget https://www.python.org/ftp/python/3.7.3/Python-3.7.3.tgz
# tar xzf Python-3.7.3.tgz
# cd Python-3.7.3

At this point we need to compile the Python source code on our system. We’ll use altinstall so as not to replace the system’s default python binary located at /usr/bin/python:

# ./configure --enable-optimizations
# make altinstall

Now that we’ve compiled our new version of Python, we can clean up the archive file and check our python3.7 version to ensure we compiled our source code correctly:

# rm /usr/src/Python-3.7.3.tgz
# python3.7 -V
Python 3.7.3

Finally, we need to use pip to install the virtualenv tool on our server:

# pip3.7 install virtualenv
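
A quick version check confirms the tool was installed against our new interpreter:

# virtualenv --version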

Creating our virtualenv

Now we’re ready to create our virtual environment within which to install CSE 2.0 server. First, let’s create a user that we’ll utilize to deploy the CSE server within the virtual environment. We can create the user and then switch to that user’s profile:

# useradd cse
# su - cse

Now we need to create a directory that will contain our virtual environment. In this example, I used the cse-env directory to house my virtual environment:

$ mkdir ~/cse-env

Now we need to create our virtual environment for our Python 3.7.3 project:

$ python3.7 -m virtualenv cse-env
Using base prefix '/usr/local'
New python executable in /home/cse/cse-env/bin/python3.7
Also creating executable in /home/cse/cse-env/bin/python
Installing setuptools, pip, wheel...
done.

Before we can start installing or using packages in the virtual environment, we’ll need to activate it. Activating a virtual environment puts the virtual environment-specific python and pip executables into your shell’s PATH. Run the following command to activate your virtual environment:

$ source ~/cse-env/bin/activate

Now check the default python version within the environment to verify we are using 3.7.3:

$ python -V
Python 3.7.3
$ pip -V
pip 19.1.1 from /home/cse/cse-env/lib/python3.7/site-packages/pip (python 3.7)

Now we’re ready to install the CSE server and we won’t have to worry about Python version conflicts as we are installing the CSE packages within our virtual environment, which will only utilize Python 3.7.3.
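
For completeness, the CSE server bits are then a single pip install away inside the activated environment (the package is published on PyPI as container-service-extension):

$ pip install container-service-extension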

Stay tuned for my next post which will walk through an installation of Container Service Extension server!!

Creating a PvDC for Enterprise PKS in vCloud Director

If you read my recent blog post regarding RBAC in the new release of VMware’s Container Service Extension for vCloud Director, you may have noticed that I mentioned a follow-up post regarding the steps required to add an Enterprise PKS controlled vCenter Server to vCloud Director. I wanted to take a little bit of time to go through that process, as it’s a relatively new workflow.

First of all, in our lab deployment, we are using an NSX-T backed vSphere environment to provide networking functionality to the Enterprise PKS deployment. As you may know, NSX-T integration is fairly new in the vCloud Director world (and growing every day!). With this in mind, the process of adding the vSphere/NSX-T components into vCD is a little bit different. Let’s have a look at the workflow for creating a Provider Virtual Datacenter (PvDC) that will support our tenants using CSE to provision Enterprise PKS kubernetes clusters.

Logging into the HTML5 vCloud Director Admin Portal

The first point to note is that we can only add a vSphere environment backed by NSX-T in the HTML5 admin portal in the current release of vCD (9.7 at the time of writing). Let’s navigate to https://vcd-director-url.com/provider and login:

Adding vCenter Server

First, we need to add our vCenter Server (vCSA) that is managed by Enterprise PKS to our vCD environment. Open the menu at the top of the page, choose the vSphere Resources option, and select the Add option above your list of existing vCSAs:

Next, we will fill out all of the required information vCD requires to connect to our vCSA. After filling out the required information, select Next:

On the NSX-V Manager section, we want to ensure that we disable the Configure Settings option here as we will be utilizing a vSphere environment backed by NSX-T, as opposed to NSX-V. After disabling the NSX-V setting, select Next:

Finally, review the configuration information and select Finish to add the vCSA to your vCD deployment:

Add NSX-T Manager

Now that we’ve added our vCSA sans NSX-V manager, we need to add our NSX-T manager to our vCD deployment. Select the NSX-T Managers menu from the left side of the portal and then select the Add option to plug our NSX-T Manager information in:

Once we fill out the required information, we can select the Save button to finish the process:

Once we’ve verified the action was successful in the Task menu, we are ready to create our PvDC!

Creating a PvDC with our PKS vCSA and NSX-T Manager

Normally, we would be able to create PvDCs in the WebUI, but PvDCs backed by NSX-T can only be created via the API. We will use the vcd-cli to accomplish this. First, we need to log in as a cloud admin user:

$ vcd login vcd.example.com System administrator -iw
Password:
administrator logged in, org: 'System', vdc: ''

Now, we use the following command to create our new PvDC where:

"PKS-PVDC" is the name of our new PvDC • "ent-cse-vcsa" is the name of our newly added vCSA • "pks-nsx-t-mgr" is the name of our newly added NSX-T manager • "*" is our storage profile • "pks-cluster" is our resource pool • "--enable" to ensure the PvDC is enabled upon creation

vcd pvdc create PKS-PVDC ent-cse-vcsa -t pks-nsx-t-mgr -s "*" -r pks-cluster --enable
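
Before heading back to the portal, we can also confirm the new PvDC from the CLI:

$ vcd pvdc list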

Now, let’s navigate back to the portal to ensure the PvDC is present and enabled. Select the Cloud Resources options from the top menu and the Provider VDCs option from the left menu:

Create our Organization and Organization Virtual Datacenters

Now that we’ve built our PvDC out, we are ready to create our tenant org and create a virtual datacenter for that tenant to utilize for their Enterprise PKS workloads.

First, navigate to the Organizations option on the left menu and select the Add option above the list of orgs:

Fill out the required information to create the org and select the Create button:

We now need to create an Organization Virtual Datacenter (OvDC) to support our org. Select the Organization VDC option from the left menu and select the New button:

I won’t walk through the options here as it’s well documented but you will need to define your Organization, PvDC, Allocation Model, Allocation Pool, Storage Policies, and Network Pool so users in your tenant org have resources to use when provisioning.

At this point, we have done all the pre-work required and we’re ready to connect this OrgVDC to our Container Service Extension instance and start provisioning our Enterprise PKS clusters in vCD!!

Implementing RBAC with VMware’s Container Service Extension 2.0 for vCloud Director

In case you haven’t heard, VMware recently announced the general availability of the Container Service Extension 2.0 release for vCloud Director. The biggest addition of functionality in the 2.0 release is the ability to use CSE to deploy Enterprise PKS clusters via the vcd-cli tool in addition to native, upstream Kubernetes clusters. I’ll be adding a blog post shortly on the process required for enabling your vCD environment to support Enterprise PKS deployments via the Container Service Extension.

Today, we are going to talk about utilizing the RBAC functionality introduced in CSE 1.2.6 to assign different permissions to our tenants to allow them to deploy Enterprise PKS (CSE Enterprise) clusters and/or native Kubernetes clusters (CSE Native). The cloud admin will be responsible for enabling and configuring the CSE service and enabling tenant admin/users to deploy CSE Enterprise or CSE Native clusters in their virtual datacenter(s).

Prerequisites

  • The CSE 2.0 server is installed and configured to serve up native Kubernetes clusters AND Enterprise PKS clusters. Please refer to the CSE documentation for more information on this process.
  • Must have at least two organizations present and configured in vCD. In this example, I’ll be utilizing the following orgs:
    • cse-native-org (native k8 provider)
    • cse-ent-org (PKS Enterprise k8 provider)
  • This example also assumes none of the organizations have been enabled for k8 providers up to this point. We will be starting from scratch!

Before We Begin

As noted above, this example assumes we have CSE 2.0 installed already in our environment, but I wanted to take some time to call out the process for enabling RBAC in CSE. When installing CSE, all we need to do to enable RBAC is ensure the enforce_authorization flag is set to true in the service section of the config.yaml file:

…output omitted…

service:
  enforce_authorization: true
  listeners: 5

…output omitted…

Please note, if we set the flag to false, any user with the ability to create compute resources via vCD will also be able to provision k8 clusters.

Enabling the “cse-native-org” Organization

The first thing we’ll need to do is grant access to the “cse-native-org” to perform CSE Native operations. We’ll first need to login to the vCD instance using the vcd-cli command with a system admin user, then we can add the right to the org.

$ vcd login vcd.example.com System administrator -iw
Password:
administrator logged in, org: 'System', vdc: ''

Now we can grant the org “cse-native-org” the right to deploy native k8 clusters:

$ vcd right add -o 'cse-native-org' "{cse}:CSE NATIVE DEPLOY RIGHT"

At this point, we have enabled the tenant with the ability to provision clusters but is that enough? What happens when we log in and attempt to provision a cluster with a user who belongs to that tenant? We’ll run the create cluster command where test-cluster is the name we assign to our cluster and nodes is the number of worker nodes we’d like to deploy:

$ vcd login vcd.example.com cse-native-org cse-native-admin -iw
Password:
cse-native-admin logged in, org: 'cse-native-org', vdc: 'native-ovdc'

$ vcd cse cluster create test-cluster --network intranet --nodes 1
Usage: vcd cse cluster create [OPTIONS] NAME
Try "vcd cse cluster create -h" for help.

Error: Access Forbidden. Missing required rights.

Here we see the RBAC feature in action! Because we haven’t added the "{cse}:CSE NATIVE DEPLOY RIGHT" right to the role associated with the user, they aren’t allowed to provision k8 clusters. NOTE: If RBAC is not enabled, any user in the org will be able to use CSE to deploy clusters for the cluster type their org is enabled for.

So let’s log back in as the administrator and give our tenant admin user the ability to provision k8 clusters. We have created a role in vCD for this user that mimics the “Organization Admin” permission set and named it cse-admin. The cse-native-admin user has been created with the cse-admin role.

$ vcd login vcd.example.com System administrator -iw
Password:
administrator logged in, org: 'System', vdc: ''

$ vcd user create 'cse-native-admin' 'password' 'cse-admin'

$ vcd role add-right 'cse-admin' "{cse}:CSE NATIVE DEPLOY RIGHT"

Finally, we need to enable the tenant’s OvDC to support native k8 cluster deployments:

$ vcd cse ovdc enable native-ovdc -o cse-native-org -k native
metadataUpdate: Updating metadata for Virtual Datacenter native-ovdc(dd7d117e-6034-467b-b696-de1b943e8664)
task: 3a6bf21b-93e9-44c9-af6d-635020957b21, Updated metadata for Virtual Datacenter native-ovdc(dd7d117e-6034-467b-b696-de1b943e8664), result: success
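
As a sanity check, we can list our OvDCs along with the k8 provider each is enabled for:

$ vcd cse ovdc list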

Now that we have given our user the right to create clusters, let’s give the cluster create command another try:

$ vcd login vcd.example.com cse-native-org cse-native-admin -iw
Password:
cse-native-admin logged in, org: 'cse-native-org', vdc: 'native-ovdc'

$ vcd cse cluster create test-cluster --network intranet --nodes 1
create_cluster: Creating cluster test-cluster(7f509a1c-4743-407d-95d3-355883191313)
create_cluster: Creating cluster vApp test-cluster(7f509a1c-4743-407d-95d3-355883191313)
create_cluster: Creating master node for test-cluster(7f509a1c-4743-407d-95d3-355883191313)
create_cluster: Initializing cluster test-cluster(7f509a1c-4743-407d-95d3-355883191313)
create_cluster: Creating 1 node(s) for test-cluster(7f509a1c-4743-407d-95d3-355883191313)
create_cluster: Adding 1 node(s) to test-cluster(7f509a1c-4743-407d-95d3-355883191313)
task: 3de7f52f-e018-4332-9731-a5fc99bde8f8, Created cluster test-cluster(7f509a1c-4743-407d-95d3-355883191313), result: success

Success!! Our user was now able to provision their cluster!! Now we can get some information about the provisioned k8 cluster and grab our k8 cluster config so we can access our new cluster with kubectl:

$ vcd cse cluster info test-cluster
property         value
---------------  -------------------------------------------------------------------------------
cluster_id       7f509a1c-4743-407d-95d3-355883191313
cse_version      2.0.0
leader_endpoint  10.10.10.210
master_nodes     {'name': 'mstr-4su4', 'ipAddress': '10.10.10.210'}
name             test-cluster
nfs_nodes
nodes            {'name': 'node-utcz', 'ipAddress': '10.10.10.211'}
number_of_vms    2
status           POWERED_ON
template         photon-v2
vapp_href        https://vcd.example.com/api/vApp/vapp-065141f8-4c5b-47b5-abee-c89cb504773b
vapp_id          065141f8-4c5b-47b5-abee-c89cb504773b
vdc_href         https://vcd.example.com/api/vdc/f703babd-8d95-4e37-bbc2-864261f67d51
vdc_name         native_ovdc

$ vcd cse cluster config test-cluster > ~/.kube/config

$ kubectl get nodes
NAME        STATUS   ROLES    AGE   VERSION
mstr-4su4   Ready    master   1d    v1.10.11
node-utcz   Ready    <none>   1d    v1.10.11

Now we’re ready to provision our first Kubernetes app!

Enabling the “cse-ent-org” Organization

Now, our cloud admin has received a request from another tenant (cse-ent-org) that they would like users in their org to be able to provision Enterprise PKS clusters. Our cloud admin will follow the same workflow documented in the previous example but substitute the “CSE Enterprise” rights for the “CSE Native” rights.

Let’s take a look at what happens if a user in the cse-ent-org tries to login and provision a cluster before our cloud admin has enabled the right to do so:

$ vcd login vcd.example.com cse-ent-org cse-ent-user -iw
Password:
cse-ent-user logged in, org: 'cse-ent-org', vdc: 'pks-ovdc'

$ vcd cse cluster create test-cluster
Usage: vcd cse cluster create [OPTIONS] NAME
Try "vcd cse cluster create -h" for help.

Error: Org VDC is not enabled for Kubernetes cluster deployment

As expected, this errors out because our cloud admin has not enabled the right to deploy k8 clusters in the org. Now, our cloud admin will login and enable the right to deploy Enterprise PKS clusters via CSE in the cse-ent-org tenant:

$ vcd login vcd.example.com System administrator -iw
Password:
administrator logged in, org: 'System', vdc: ''

$ vcd right add "{cse}:PKS DEPLOY RIGHT" -o cse-ent-org
Rights added to the Org 'cse-ent-org'

Just as in the previous example, we need to create a user and a role that will allow our user to provision k8 clusters in this org. We have created a custom role in this example that mimics the vApp Author permissions and named it pks-k8-role. The role has been assigned to the user that needs to create k8 clusters. Then, we need to give that user role the right to deploy Enterprise PKS clusters:

$ vcd user create 'cse-ent-user' 'password' 'pks-k8-role'

$ vcd role add-right "pks-k8-role" "{cse}:PKS DEPLOY RIGHT"

The user in the tenant org has been granted rights by the cloud admin; now we need to enable the OvDC to allow deployment of Enterprise PKS clusters:

$ vcd cse ovdc enable pks-ovdc -o cse-ent-org -k ent-pks -p "small" -d "test.local" 

metadataUpdate: Updating metadata for Virtual Datacenter pks-ovdc(edu4617e-6034-467b-b696-de1b943e8664) 
task: 3a6bf21b-93e9-44c9-af6d-635020957b21, Updated metadata for Virtual Datacenter pks-ovdc(edu4617e-6034-467b-b696-de1b943e8664), result: success

Note: When enabling an org for Enterprise PKS, we need to define the plan and the domain to be assigned to the instances (and load-balancer) that PKS will provision. Currently, you can only enable one plan per org, but you can run the above command again with a different plan if you’d like to switch in the future.

It’s also worth mentioning that you can create separate OrgVDCs within the same org and enable one OVDC for Native K8 and the other for Enterprise PKS if users in the same tenant org have different requirements.

Finally, we are ready to provision our PKS cluster. We’ll login as our cse-ent-user and deploy our cluster:

$ vcd login vcd.example.com cse-ent-org cse-ent-user -iw 
Password: 
cse-ent-user logged in, org: 'cse-ent-org', vdc: 'pks-ovdc'

$ vcd cse cluster create test-cluster-pks
property                     value
---------------------------  --------------------------------------------------------
compute_profile_name         cp--41e132c6-4480-48b1-a075-31f39b968a50--cse-ent-ovdc-1
kubernetes_master_host       test-cluster-pks.test.local
kubernetes_master_ips        In Progress
kubernetes_master_port       8443
kubernetes_worker_instances  2
last_action                  CREATE
last_action_description      Creating cluster
last_action_state            in progress
name                         test-cluster-pks
pks_cluster_name             test-cluster-pks---5d33175a-3010-425b-aabe-bddbbb689b7e
worker_haproxy_ip_addresses

We can continue to monitor the status of our cluster create with the cluster list or cluster info commands:

$ vcd cse cluster list 

k8s_provider    name              status            vdc
--------------  ----------------  ----------------  --------------
ent-pks         test-cluster-pks  create succeeded  pks-ovdc

Now that we have verified our cluster has been created successfully, we need to obtain the config file so we can access the cluster with kubectl:

$ vcd cse cluster config test-cluster-pks > ~/.kube/config

Now our user is ready to deploy apps on their PKS cluster!

As a final test, let’s see what happens when a user in the same org that we just enabled for Enterprise PKS (cse-ent-org) tries to provision a cluster. This user (vapp-user) has been assigned the “vApp Author” role as it exists “out of the box.”

$ vcd login vcd.example.com cse-ent-org vapp-user -iw 
Password: 
vapp-user logged in, org: 'cse-ent-org', vdc: 'pks-ovdc'

$ vcd cse cluster create test-cluster-pks-2 
Usage: vcd cse cluster create [OPTIONS] NAME 
Try "vcd cse cluster create -h" for help.

Error: Access Forbidden. Missing required rights. 

There we have it, RBAC in full effect!! The user cannot provision a cluster, even though the org is enabled for Enterprise PKS cluster creation, because their assigned role does not have the rights to do so.

Conclusion

This was a quick overview of the capabilities provided by the Role Based Access Control functionality present in VMware’s Container Service Extension 2.0 for vCloud Director. We were able to allow users in orgs to provision k8 clusters of both the native and Enterprise PKS variants. We also showcased how we can prevent “unprivileged” users in the same org from provisioning k8 clusters. Hope you found it useful!!

Deploying VMware vCloud Director on a Single Virtual Machine with a Single Network Interface

Recently, while testing the new Container Service Extension 2.0 Beta release, I found myself needing a quick (and easily replicable) instantiation of vCloud Director in my lab environment. Since this needed to live in my lab, I wanted to use the least amount of resources and virtual machines possible to keep things simple. I decided to deploy a single CentOS virtual machine that housed the PostgreSQL database, the RabbitMQ server (for my subsequent deployment of CSE), and the actual vCD server itself. I also decided to deploy using a single network interface.

Before we get started, I want to lay out some assumptions I’ve made in this environment that will need to be taken into consideration if you’d like to replicate this deployment as documented:

  • All of my servers’ hostnames are resolvable (I’m using dnsmasq to easily provide DNS/DHCP support in my lab)

  • I’ve disabled firewalld, as this lab is completely isolated from outside traffic. This is NOT secure and NOT recommended for a production deployment. See the installation documentation for port requirements for vCD.

  • I’ve also persistently disabled SELinux. Again, this is NOT secure and NOT recommended for production, but I wanted one less thing to troubleshoot should config issues arise.

  • I’ve configured an NTP server in my lab that all the servers connect to. NTP is a requirement for vCD installation.

  • I am going to use the tooling provided by vCD to create self-signed SSL certs for use with vCD. Again, this is NOT secure and NOT recommended for production, but it is better suited for quick test deployments in a controlled lab environment.

I’ve configured a CentOS 7.6 server with 4 vCPU, 8GB of memory and a 20GB hard drive. After installation of my OS, I verify the configuration stated above and update my server to the latest and greatest:

# yum update -y

Installing PostgreSQL

At this point, we are ready to install and configure our PostgreSQL database (note: vCD requires PostgreSQL 10).

First, we’ll need to configure our server to have access to the PostgreSQL repo:

# rpm -Uvh https://yum.postgresql.org/10/redhat/rhel-7-x86_64/pgdg-centos10-10-2.noarch.rpm

Now that we have configured the repo, we need to install the PostgreSQL 10 packages:

# yum install -y postgresql10-server postgresql10

Now that the database packages are installed, we need to initialize the database, start the service, and ensure it starts automatically at boot:

# /usr/pgsql-10/bin/postgresql-10-setup initdb
# systemctl start postgresql-10.service
# systemctl enable postgresql-10.service

Now that Postgres is installed, let’s verify the installation by logging in to the database with the “postgres” user (created during installation) and setting the password:

# su - postgres -c "psql"

psql (10.0)
Type "help" for help.
postgres=# \password postgres
**enter pw at prompt**
postgres=# \q

We can run the createuser command as the postgres OS user to create the vcloud postgres user:

# su - postgres
-bash-4.2$ createuser vcloud --pwprompt

Log back into the psql prompt to create the database the vCD instance will utilize (vcloud), as well as setting the vcloud user password:

-bash-4.2$ psql
postgres=# create database vcloud owner vcloud;
CREATE DATABASE
postgres=# alter user vcloud password 'your-password';
ALTER ROLE

Next, we’ll need to allow our vcloud user to log in to the database:

postgres=# alter role vcloud with login;
ALTER ROLE
postgres=# \q

Finally, we need to allow logins to the Postgres DB with a username/password combination. Since I’m deploying this in a controlled lab environment, I’m going to open connections up to all IP addresses. Add the following line to the bottom of the ~/10/data/pg_hba.conf file (editing as the postgres user):

-bash-4.2$ vi ~/10/data/pg_hba.conf

host all all 0.0.0.0/0 md5

We also need to ensure that the database is listening for connections. Edit the postgresql.conf file and ensure the following line is not commented out and change ‘localhost’ to ‘*’:

-bash-4.2$ vi 10/data/postgresql.conf

listen_addresses = '*'

Now that we’ve made these changes, return to the root user and restart the PostgreSQL service:

-bash-4.2$ exit
# systemctl restart postgresql-10
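
A quick way to confirm that password authentication over TCP now works is to connect as the vcloud user (you’ll be prompted for the password we set earlier):

# su - postgres -c "psql -h 127.0.0.1 -U vcloud -d vcloud"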

Installing RabbitMQ

Now that we’ve got our PostgreSQL DB configured, we need to configure RabbitMQ on the server. AMQP, the Advanced Message Queuing Protocol, is an open standard for message queuing that supports flexible messaging for enterprise systems. vCloud Director uses the RabbitMQ AMQP broker to provide the message bus used by extension services, object extensions, and notifications.

On our CentOS install, we need to configure access to the EPEL repo, which provides packages and dependencies we’ll need to install RabbitMQ. After configuring the repo, we need to install Erlang, which is the language RabbitMQ is written in:

# yum -y install epel-release
# yum -y install erlang socat

For Linux installs, RabbitMQ provides a precompiled RPM which can be installed directly on the server (once erlang is installed). Download and install RabbitMQ via the commands below:

# wget https://www.rabbitmq.com/releases/rabbitmq-server/v3.6.10/rabbitmq-server-3.6.10-1.el7.noarch.rpm
# rpm -Uvh rabbitmq-server-3.6.10-1.el7.noarch.rpm

Now that we have installed RabbitMQ on the server, we are ready to start the RabbitMQ server, ensure it automatically starts on boot, and verify the status of the service:

# systemctl start rabbitmq-server
# systemctl enable rabbitmq-server
# systemctl status rabbitmq-server

Once we’ve verified the status of the RabbitMQ service is “active,” we need to set up an admin user (I’ve used admin in this case, but you can configure any username you’d like) to allow connections to the queue from vCD:

# rabbitmq-plugins enable rabbitmq_management
**output omitted**
# chown -R rabbitmq:rabbitmq /var/lib/rabbitmq/
# rabbitmqctl add_user admin **your-password**
Creating user "admin"
# rabbitmqctl set_user_tags admin administrator
Setting tags for user "admin" to [administrator]
# rabbitmqctl set_permissions -p / admin ".*" ".*" ".*"
Setting permissions for user "admin" in vhost "/"
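
We can sanity-check the user and permissions we just configured:

# rabbitmqctl list_users
# rabbitmqctl list_permissions -p /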

Installing vCloud Director

We’ve got PostgreSQL and RabbitMQ configured on the server, now we are ready to pull down and install the vCD binary. I’ve pulled the vCD install package directly from MyVMware down to my local desktop and copied the file over to my vCD server at /vcloud.bin and modified permissions so I can execute the script. Before we run the script, we need to install a couple of dependencies the script requires to run to completion:

# yum install libXdmcp libXtst redhat-lsb -y

Now we are ready to run the installation script. After the script finishes, decline the option to run the configure script as we will do this manually later:

# chmod u+x /vcloud.bin
# ./vcloud.bin

**output omitted**

Would you like to run the script now? (y/n)? N

Now that we’ve installed the vCD packages, we can use the tooling provided to generate self-signed certificates. If you have existing certs or you’d like to create and sign your own certs, please refer to the installation documentation for the proper procedure to create signed certs or upload existing certs. The following command creates certificates for the http and console proxy services and stores them in a keystore file at /tmp/cell.ks with a password of mypassword:

# cd /opt/vmware/vcloud-director/bin
# ./cell-management-tool generate-certs -j -p -o /tmp/cell.ks -w mypassword

We can verify the keystore contains 2 keys with the following command:

# /opt/vmware/vcloud-director/jre/bin/keytool -storetype JCEKS \
-storepass mypassword -keystore /tmp/cell.ks -list
**output omitted**

consoleproxy, May 6, 2019, PrivateKeyEntry,
Certificate fingerprint (SHA1): 7B:FB...
http, May 6, 2019, PrivateKeyEntry,
Certificate fingerprint (SHA1): 14:DD…

Configuring vCloud Director

Now that we have created our certs, we are ready to configure the vCD server. Since we are using the same interface for http and console proxy, we need to perform an unattended install and define ports for each service. For details on this process, see the installation documentation section for unattended installations. As an example, the following command configures both http and console proxy on the same IP (10.10.10.100), using the default port 443 for secure http access while using 8443 for secure console access. We also define the keystore we created earlier, as well as the password for that keystore.

First, let’s change directory into the location of the configure script:

# cd /opt/vmware/vcloud-director/bin

Now we are ready to run the configure command:

# ./configure -ip 10.10.10.100 -cons 10.10.10.100 --primary-port-http 80 \
--console-proxy-port-https 8443 -dbtype postgres \
-dbhost 10.10.10.100 -dbname vcloud -dbuser vcloud \
-dbpassword **db-password** -k /tmp/cell.ks -w mypassword \
--enable-ceip false -unattended
......................................../
Database configuration complete.

We can view the logs for the configuration attempt in the directory /opt/vmware/vcloud-director/logs/ at the configure-timestamp location:

# cd /opt/vmware/vcloud-director/logs/
# less configure-timestamp

**output omitted**

vCloud Director configuration is now complete.
Once the vCloud Director server has been started you will be able to
access the first-time setup wizard at this URL:
https://FQDN

Before starting the vCD service, we’ll also need to configure a system administrator user using the cell-management-tool. This will allow us to log in to the vCloud Director admin portal and begin our vCD configuration (you’ll also be asked to specify a password for the system admin user after running the cell-management-tool command):

# cd /opt/vmware/vcloud-director/bin
# ./cell-management-tool system-setup --user admin --full-name "VCD System Administrator" \
--email vcd-admin@example.com --system-name VCD --installation-id 1

where --user is our admin user name, --system-name is the name that is used to create a vCenter folder in each vCenter Server with which it registers, and --installation-id is the numerical id of the specific instance of VCD. For more information on using the cell-management-tool to configure the system admin user, please refer to the VMware documentation.

At this point, we are ready to start the vCD service:

# service vmware-vcd start    
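
The cell can take a few minutes to fully initialize; we can watch it come up by tailing the cell log:

# tail -f /opt/vmware/vcloud-director/logs/cell.log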

After confirming the service has started, navigate to https://FQDN to begin your vCD configuration!!

Enjoy!!