Using Harbor and Kubeapps to Serve Custom Helm Charts

In my last post, I walked through the process of deploying Kubeapps in an Enterprise PKS Kubernetes cluster. In this post, I wanted to examine the workflow required for utilizing Harbor, an open source cloud native registry, as an option to serve out a curated set of Helm charts to developers in an organization. We’ll walk through a couple of scenarios, including configuring a “private” project in Harbor that houses Helm charts and container images for a specific group of developers. Building on my last post, we’ll also add this new Helm chart repository into our Kubeapps deployment to allow our developers to deploy our curated applications directly from the Kubeapps dashboard.

Harbor is an open source trusted cloud native registry project that stores, signs, and scans content. Harbor extends the open source Docker Distribution by adding functionality typically required by users, such as security, identity, and management. Having a registry closer to the build and run environment can improve image transfer efficiency. Harbor supports replication of images between registries, and also offers advanced security features such as user management, access control, and activity auditing. Enterprise support for Harbor Container Registry is included with VMware Enterprise PKS.

Along with the ability to host container images, Harbor also recently added functionality to act as a Helm chart repository. Harbor admins create “projects” that are normally dedicated to certain teams or environments. These projects, public or private, house container images as well as Helm charts to allow our developers to easily deploy curated applications in their Kubernetes cluster(s).

We already have Harbor deployed in our environment as an OpsMan tile. For more information on installing Harbor in conjunction with Enterprise PKS, see documentation here. For instructions detailing the Harbor installation procedure outside of an Enterprise PKS deployment, see the community documentation here.

Let’s get started!!

Creating a Private Project in Harbor

The first thing we’ll need to do is create a new private project that we’ll use to store our container images and Helm charts for our group of developers.

Navigate to the Harbor web UI and login with the admin credentials defined on install. Once logged in, select the + New Project button above the list of existing projects:

Name the project (developers-private-project in our case) and leave the Public option unchecked, as we only want our specific developer group to have access to this project:

Select the newly created project from the list and note the different menus we have available to us regarding the project, including Repositories, which will house our container images, as well as Helm Charts, which will house our Helm charts. We can also add individual members to the project to allow them to authenticate to the project with a username/password combination when pulling/pushing images or Helm charts to the project. For now, let’s select the Configuration tab and select the Automatically scan images on push option. This will instruct Harbor to scan container images for possible CVEs when they are uploaded to the project. Select Save:

Now that we’ve configured our private project, we need to upload our container image that will serve as the basis for our app.

Upload Image to Private Harbor Project

Now that we’ve created our project, we need to populate the project with the container image we are going to use to power this application.

In this example, we are using a simple “To Do List” application. Additional details on the application can be found here.

You’ll need access to a server with docker installed to perform this workflow. I am using the same Linux server where my Helm client is installed.

First, pull the docker image from the public repository:

$ docker pull prydonius/todo

Verify the image has been pulled:

$ docker images
REPOSITORY          TAG                 IMAGE ID            CREATED             SIZE
prydonius/todo      latest              4089c4ba4620        24 months ago       107MB

Since the project is private, we need to use docker login to authenticate against Harbor. Use the Harbor admin user credentials to authenticate:

$ docker login harbor.pks.zpod.io

Login Succeeded

Now we can tag the todo image with the Harbor url as well as the repo name and tag (which we define, v1 in this case) and push it to our private registry:

$ docker tag prydonius/todo:latest harbor.pks.zpod.io/developers-private-project/todo:v1


$ docker push harbor.pks.zpod.io/developers-private-project/todo:v1

Let’s head over to the Harbor web UI and ensure our image has been successfully uploaded. Navigate to the Projects tab in the left hand menu, select the developers-private-project project, and ensure the todo image is present:

While we're here, let's click on the link for the image and examine the vulnerabilities:

As we selected the option to scan all images on push, our todo container was automatically scanned when it was uploaded. There are a couple of vulnerabilities of “High” severity that we’d want to examine before pushing this app to production. Harbor also provides the ability to set rules in the configuration for each project to ensure containers with known vulnerabilities are not deployed in clusters. Our development environment is not exposed outside of our datacenter so we can let this slide…for now.

Now that we’ve uploaded our container image, we are ready to build our custom Helm chart that will utilize this image in our Harbor repository to build the application in our Kubernetes cluster.

Creating our Custom Helm Chart

As discussed in the last post, Helm uses charts, a collection of files that describe a related set of Kubernetes resources, to simplify the deployment of applications in a Kubernetes cluster. Today, we are going to build a simple Helm chart that deploys our todo app and exposes the app via a load balancer.

We’ll navigate to the server running the Helm client and issue the following command which will build out the scaffolding required for a Helm chart. We’ll call this chart dev-to-do-chart:

$ helm create dev-to-do-chart

The following directory structure will be created:

dev-to-do-chart
|-- Chart.yaml
|-- charts
|-- templates
|   |-- NOTES.txt
|   |-- _helpers.tpl
|   |-- deployment.yaml
|   |-- ingress.yaml
|   `-- service.yaml
`-- values.yaml

The templates/ directory is where Helm finds the YAML definitions for your Services, Deployments and other Kubernetes objects. We will define variables for our deployment in the values.yaml file. Values here can be dynamically set at deployment time to define things such as using an Ingress resource to expose the application or assigning persistent storage to the app.

Let’s edit the values.yaml file to add a couple of additional bits of information. We want to define the image that we will use to back our application deployment. We’ll use the todo container image that we just uploaded to our private project.

Also, since this project/repository is private, we need to create a Kubernetes secret that contains access information for the repository so Kubernetes (and docker) is allowed to pull the image. For additional information on this process, see the Kubernetes documentation here.
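As a sketch, the secret could be created with kubectl's built-in docker-registry secret type. The registry URL and admin credentials here are the ones from our environment; substitute your own (and add `-n <namespace>` if your app won't live in the default namespace):

```shell
# Create a docker-registry secret named private-repo-sec that Kubernetes
# will use when pulling images from our private Harbor project
kubectl create secret docker-registry private-repo-sec \
  --docker-server=harbor.pks.zpod.io \
  --docker-username=admin \
  --docker-password=<password>
```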

In our example, I have created the private-repo-sec secret that we will add to the values.yaml, along with the image name:

$ vi dev-to-do-chart/values.yaml

---
image:
  repository: harbor.pks.zpod.io/developers-private-project/todo
  tag: v1
  pullPolicy: IfNotPresent
imagePullSecrets:
- name: private-repo-sec

This will instruct Helm to build a Kubernetes deployment that contains a pod comprised of our todo container from our developers-private-project repo and utilize the private-repo-sec secret to authenticate to the private project.
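For reference, the generated templates/deployment.yaml consumes these values roughly as follows. This is a trimmed excerpt, not the full scaffold (which varies by Helm version), and note that older `helm create` scaffolds may not reference imagePullSecrets out of the box, in which case the stanza below would need to be added by hand:

```yaml
# templates/deployment.yaml (excerpt)
spec:
  template:
    spec:
      imagePullSecrets:
{{ toYaml .Values.imagePullSecrets | indent 8 }}
      containers:
      - name: {{ .Chart.Name }}
        image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
        imagePullPolicy: {{ .Values.image.pullPolicy }}
```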

Let’s also create a README.md (in the dev-to-do-chart directory) file that will display information about the Helm chart that will be visible in our Kubeapps dashboard:

$ vi dev-to-do-chart/README.md 

___
This chart will deploy the "To Do" application. 

Set "Service" to type "LoadBalancer" in the values file to expose the application via an L4 NSX-T load balancer.
___

Now that we’ve configured our chart, we need to package it up so we can upload it to our Harbor chart repo to share with our developers. Navigate back to the parent directory and run the following command to package the chart:

$ helm package ./dev-to-do-chart
Successfully packaged chart and saved it to: /home/user/dev-to-do-chart-0.1.0.tgz

We’ve created and packaged our custom Helm chart for our developers, now we’re ready to upload the chart to Harbor so they can deploy the todo application!!

Uploading Custom Helm Chart to Harbor

There are two ways to upload a Helm chart to Harbor:

  • Via the Harbor web UI
  • Via the Helm CLI tool

We are going to use the Helm CLI tool to push the chart to our private project. The first thing we’ll need to do is grab the ca.crt for our project which will allow us to add the chart repo from our Harbor project to our local Helm client.

Navigate back to the homepage for the developers-private-project and select the Registry Certificate link:

This will download the ca.crt that we can use in the following command to push our Helm chart to our project. Since the project is private, we will need to authenticate with the admin user's credentials as well as the ca.crt when we add the repo to our Helm repo list:

Note: These commands should be run from the Linux server where the Helm client is installed.

$ helm repo add developers-private-project --ca-file=ca.crt --username=admin --password=<password> https://harbor.pks.zpod.io/chartrepo/developers-private-project

Let’s verify the repo was added to our Helm repo list:

$ helm repo list
NAME                        URL                                                                                               
developers-private-project  https://harbor.pks.zpod.io/chartrepo/developers-private-project

It should be noted that the native Helm CLI does not support pushing charts, so we need to install the helm-push plugin:

$ helm plugin install https://github.com/chartmuseum/helm-push

Now we’re ready to push our chart to our Harbor project:

$ helm push --ca-file=ca.crt --username=admin --password=<password> dev-to-do-chart-0.1.0.tgz developers-private-project
Pushing dev-to-do-chart-0.1.0.tgz to developers-private-project...
Done.

Let’s update our helm repos and search for our chart via the Helm CLI to confirm it is available in our project’s chart repo:

$ helm repo update


$ helm search dev-to-do
NAME                                        CHART VERSION   APP VERSION DESCRIPTION                
developers-private-project/dev-to-do-chart  0.1.0           1.0         A Helm chart for Kubernetes
local/dev-to-do-chart                       0.1.0           1.0         A Helm chart for Kubernetes

Now let’s confirm we can see it in the Harbor web UI as well. Navigate back to the developers-private-project homepage and select the Helm Charts tab:

Awesome!! Now we’re finally ready to add our private chart repo into our Kubeapps deployment so our developers can deploy our to-do app via the Kubeapps dashboard.

Adding a Private Project Helm Chart Repo to Kubeapps

Now that we’ve created our private project, populated with our custom container image and helm chart, we are ready to add the Helm chart repo into our Kubeapps deployment so our developers can deploy the to-do application via the Kubeapps dashboard.

First, we need to access our Kubeapps dashboard. Once we've authenticated with our token, hover over the Configuration button in the top right-hand corner and select the App Repositories option from the drop down:

Select the Add App Repository button and fill in the required details. We are using basic authentication with the Harbor admin user's credentials. We also will need to add our ca.crt file as well. When finished, select the Install Repo button:

If all the credentials have been populated correctly, we can click on the developers-private-project link and see our dev-to-do-chart Helm chart:

Now, our developers can log in to the Kubeapps dashboard, select the Catalog option, search for our dev-to-do-chart, click on the entry, and select the Deploy button on the subsequent browser page:

In order for our developers to expose this app for access from outside of the Kubernetes cluster, we need to change the Service type from ClusterIP to LoadBalancer:
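In the values editor, that change amounts to flipping the service type. A minimal excerpt (the surrounding keys come from the chart's default values.yaml):

```yaml
service:
  type: LoadBalancer
  port: 80
```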

Once they’ve made this change, they can select the Submit button to deploy the application in their Kubernetes cluster. The subsequent webpage will show us information about our deployment, including the URL (IP of the NSX-T load balancer that was automatically created, highlighted with a red box in the screenshot) as well as the current state of the deployment:

Note: The automatic creation of the LoadBalancer service is made possible by the integration between NSX-T and Enterprise PKS. These instructions will need to be augmented to provide this same functionality running on a different set of infrastructure.

Navigate to the IP address of the load balancer to test application access:

Boom!! There we have it, our application being served out via our NSX-T L4 load balancer resource.

Conclusion

In this post, we walked through the steps required to create a private Harbor project for our developers that will house custom container images and Helm charts as well as building a custom Helm chart and uploading our container image and custom Helm chart to that private project.

We also walked through the process of adding a private Helm chart repo, hosted by our Harbor deployment, in to our Kubeapps dashboard so our developers can deploy this custom application for testing in their Kubernetes clusters.

Deploying Kubeapps and Exposing the Dashboard via Ingress Controller in Enterprise PKS

In this post, I’d like to take some time to walk through the process of deploying Kubeapps in an Enterprise PKS kubernetes cluster. I’ll also walk through the process of utilizing the built-in ingress controller provided by NSX-T to expose the Kubeapps dashboard via a fully qualified domain name.

What is Kubeapps?

There’s been a lot of excitement in the Cloud Native space at VMware since the acquisition of Bitnami last year. The Bitnami team has done a lot of amazing work over the years to simplify the process of application deployment across all types of infrastructure, both in public and private clouds. Today we are going to take a look at Kubeapps. Kubeapps, an open source project developed by the folks at Bitnami, is a web-based UI for deploying and managing applications in Kubernetes clusters. Kubeapps allows users to:

  • Browse and deploy Helm charts from chart repositories
  • Inspect, upgrade and delete Helm-based applications installed in the cluster
  • Add custom and private chart repositories (supports ChartMuseum and JFrog Artifactory)
  • Browse and provision external services from the Service Catalog and available Service Brokers
  • Connect Helm-based applications to external services with Service Catalog Bindings
  • Secure authentication and authorization based on Kubernetes Role-Based Access Control

Assumptions/Pre-reqs

Before we get started, I wanted to lay out some assumptions and pre-reqs regarding the environment I’m using to support this Kubeapps deployment. First, some info about the infrastructure I’m using to support my kubernetes cluster:

  • vSphere 6.7u2
  • NSX-T 2.4
  • Enterprise PKS 1.4.1
  • vSphere Cloud Provider configured for persistent storage
  • A wildcard DNS entry to support your app ingress strategy

I’m also making the assumption that you have Helm installed on your kubernetes cluster as well. Helm is a package manager for kubernetes. Helm uses a packaging format called charts. A chart is a collection of files that describe a related set of Kubernetes resources. A single chart might be used to deploy something simple, like a memcached pod, or something complex, like a full web app stack with HTTP servers, databases, caches, and so on. Kubeapps uses Helm charts to deploy application stacks to kubernetes clusters so Helm must be deployed in the cluster prior to deploying Kubeapps. In this tutorial, we’re actually going to deploy kubeapps via the helm chart as well!

Finally, in order for Kubeapps to be able to deploy applications into the cluster, we will need to create some kubernetes RBAC resources. First, we’ll create a serviceaccount (called kubeapps-operator) and attach a clusterrole to the serviceaccount via a clusterrolebinding to allow the service account to deploy apps in the cluster. For the sake of simplicity, we are going to assign this service account cluster-admin privileges. This means the kubeapps-operator service account has the highest level of access to the kubernetes cluster. This is NOT recommended in production environments. I’ll be publishing a follow-up post on best practices for deploying Helm and Kubeapps in a production environment soon. Stay tuned!

Preparing the Cluster for a Kubeapps Deployment

The first thing we’ll want to do is add the Bitnami repo to our Helm configuration, as the Bitnami repo houses the Kubeapps Helm chart:

$ helm repo add bitnami https://charts.bitnami.com/bitnami

Now that we’ve added the repo, let’s create a namespace for our Kubeapps deployment to live in:

$ kubectl create ns kubeapps

Now we’re ready to create our serviceaccount and attach our clusterrole to it:

$ kubectl create serviceaccount kubeapps-operator 
$ kubectl create clusterrolebinding kubeapps-operator \
--clusterrole=cluster-admin \
--serviceaccount=default:kubeapps-operator

Let’s use Helm to deploy our Kubeapps application!!

$ helm install --name kubeapps --namespace kubeapps bitnami/kubeapps \
--set mongodb.securityContext.enabled=false \
--set mongodb.mongodbEnableIPv6=false

Note, we could opt to set frontend.service.type=LoadBalancer if we wanted to utilize the Enterprise PKS/NSX-T integration to expose the dashboard via a dedicated IP but since we’re going to use an Ingress controller (also provided by NSX-T), we’ll leave that option out.

After a minute or two, we can check what was deployed via the Kubeapps Helm chart and ensure all the pods are available:

$ kubectl get all -n kubeapps
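To log in to the dashboard later, we’ll need the kubeapps-operator service account’s bearer token. One way to retrieve it (assuming the service account lives in the default namespace, as created above):

```shell
# Look up the secret attached to the kubeapps-operator service account,
# then extract and decode its bearer token for the dashboard login
kubectl get secret \
  $(kubectl get serviceaccount kubeapps-operator -o jsonpath='{.secrets[0].name}') \
  -o jsonpath='{.data.token}' | base64 --decode
```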

Exposing the Kubeapps Dashboard via FQDN

Our pods and services are now available, but we haven’t exposed the dashboard for access from outside of the cluster yet. For that, we need to create an ingress resource. If you review the output from the screenshot above, the kubeapps service, of type ClusterIP, is serving out our dashboard on port 80. The kubernetes service type of ClusterIP only exposes our service internally within the cluster so we’ll need to create an ingress resource that targets this service on port 80 so we can expose the dashboard to external users.

Part of the Enterprise PKS and VMware NSX-T integration provides an ingress controller per kubernetes cluster provisioned. This ingress controller is actually an L7 Load Balancer in NSX-T primitives. Any time we create an ingress service type in our Enterprise PKS kubernetes cluster, NSX-T automatically creates an entry in the L7 load balancer to redirect traffic, based on hostname, to the correct services/pods in the cluster.

As mentioned in the Pre-reqs section, I’ve got a wildcard DNS entry that redirects *.prod.example.com to the IP address of the NSX-T L7 Load Balancer. This allows my developers to use the native kubernetes ingress services to define the hostname of their applications without having to work with me or my infrastructure team to manually update DNS records every time they want to expose an application to the public.
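In a BIND-style zone file, that wildcard entry might look like the following. The IP here is the load balancer address from my environment; adjust for yours:

```
; Wildcard record pointing all prod.example.com hostnames at the NSX-T L7 LB
*.prod.example.com.   IN  A   10.96.59.106
```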

Enough talk, let’s deploy our ingress controller! I’ve used the .yaml file below to expose my Kubeapps dashboard at kubeapps.prod.example.com:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: kubeapps-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - host: kubeapps.prod.example.com
    http:
      paths:
      - path: /*
        backend:
          serviceName: kubeapps 
          servicePort: 80

As we can see, we are telling the Ingress service to target the kubeapps service on port 80 to “proxy” the dashboard to the public. Now let’s create that ingress resource:

$ kubectl create -f kubeapps-ingress.yaml -n kubeapps

And review the service to get our hostname and confirm IP address of the NSX-T L7 Load Balancer:

$ kubectl get ing -n kubeapps
NAME               HOSTS                       ADDRESS                     PORTS   AGE
kubeapps-ingress   kubeapps.prod.example.com   10.96.59.106,100.64.32.27   80      96m

Note, the 10.96.59.106 address is the IP of the NSX-T Load Balancer, which is where my DNS wildcard is directing requests to, and the HOSTS entry is the hostname our Kubeapps dashboard should be accessible on. So let’s check it out!

Now we’re ready to deploy applications in our kubernetes cluster with the click of a button!!

Behind the Scenes with NSX-T

So let’s have a look at what’s actually happening in NSX-T and how we can cross reference this with what’s going on with our Kubernetes resources. As I mentioned earlier, any time an Enterprise PKS cluster is provisioned, two NSX-T Load Balancers are created automatically:

  • An L4 load balancer that fronts the kubernetes master(s) to expose the kubernetes API to external users
  • An L7 load balancer that acts as the ingress controller for the cluster

So, we’ve created an ingress resource for our Kubeapps dashboard, let’s look at what’s happening in the NSX-T manager.

So let’s navigate to the NSX-T manager, login with our admin credentials and navigate to the Advanced Networking and Security tab. Navigate to Load Balancing and choose the Server Pools tab on the right side of the UI. I’ve queried the PKS API to get the UUID for my cluster (1cd1818c...), which corresponds with the LB we want to inspect (Note: you’ll see two LB entries for the UUID mentioned, one for kubernetes API, the other for the ingress controller):

Select the Load Balancer in question and then select the Pool Members option on the right side of the UI:

This will show us two kubernetes pods and their internal IP addresses. Let’s go back to the CLI and compare this with what we see in the cluster:

$ kubectl get pods -l app=kubeapps -o wide -n kubeapps
NAME                        READY   STATUS    RESTARTS   AGE    IP            NODE                                   
kubeapps-7cd9986dfd-7ghff   1/1     Running   0          124m   172.16.17.6   0faf789a-18db-4b3f-a91a-a9e0b213f310
kubeapps-7cd9986dfd-mwk6j   1/1     Running   0          124m   172.16.17.7   8aa79ec7-b484-4451-aea8-cb5cf2020ab0

So this confirms that our 2 pods serving out our Kubeapps dashboard are being fronted by our L7 Load Balancer in NSX-T.

Conclusion

I know that was a lot to take in, but I wanted to make sure to review the actions we performed in this post:

  • Created a serviceaccount and clusterrolebinding to allow Kubeapps to deploy apps
  • Deployed our Kubeapps application via a Helm Chart
  • Exposed the Kubeapps dashboard for external access via our NSX-T “ingress controller”
  • Verified that Enterprise PKS and NSX-T worked together to automate the creation of all of these network resources to support our applications

As I mentioned above, stay tuned for a follow up post that will detail security implications for deploying Helm and Kubeapps in Production environments. Thanks for reading!!!

Creating a virtualenv with Python 3.7.3

As I’ve mentioned in recent posts, VMware’s Container Service Extension 2.0 (CSE) has recently been released. The big news around the 2.0 release is the ability to provision Enterprise PKS clusters via CSE.

It’s important to note that CSE 2.0 has a dependency on Python 3.7.3 or later. I had some trouble managing different versions of Python3 on the CentOS host I used to support the CSE server component. I wanted to document my steps in creating a virtual environment via virtualenv utilizing Python 3.7.3 and installing CSE Server 2.0 within the virtual environment.

virtualenv is a tool to create isolated Python environments. virtualenv creates a folder which contains all the necessary executables to use the packages that a Python project would need. This is useful in my situation as I had various versions of Python 3 installed on my CentOS server and I wanted to ensure Python 3.7.3 was being utilized exclusively for the CSE installation while not affecting other services running on the server utilizing Python3.

Installing Python 3.7.3 on CentOS

The first thing we need to do is install (and compile) Python 3.7.3 on our CentOS server.

We’ll need some development packages and the GCC compiler installed on the server:

# yum install -y zlib-devel gcc openssl-devel bzip2-devel libffi-devel

Next, we’ll pull down the Python 3.7.3 bits from the official Python site and unpack the archive:

# cd /usr/src
# wget https://www.python.org/ftp/python/3.7.3/Python-3.7.3.tgz
# tar xzf Python-3.7.3.tgz
# cd Python-3.7.3

At this point we need to compile the Python source code on our system. We’ll use altinstall so as not to replace the system’s default python binary located at /usr/bin/python:

# ./configure --enable-optimizations
# make altinstall

Now that we’ve compiled our new version of Python, we can clean up the archive file and check our python3.7 version to ensure we compiled our source code correctly:

# rm /usr/src/Python-3.7.3.tgz
# python3.7 -V
Python 3.7.3

Finally, we need to use pip to install the virtualenv tool on our server:

# pip3.7 install virtualenv

Creating our virtualenv

Now we’re ready to create our virtual environment within which to install CSE 2.0 server. First, let’s create a user that we’ll utilize to deploy the CSE server within the virtual environment. We can create the user and then switch to that user’s profile:

# useradd cse
# su - cse

Now we need to create a directory that will contain our virtual environment. In this example, I used the cse-env directory to house my virtual environment:

$ mkdir ~/cse-env

Now we need to create our virtual environment for our Python 3.7.3 project:

$ python3.7 -m virtualenv cse-env
Using base prefix '/usr/local'
New python executable in /home/cse/cse-env/bin/python3.7
Also creating executable in /home/cse/cse-env/bin/python
Installing setuptools, pip, wheel...
done.

Before we can start installing or using packages in the virtual environment, we’ll need to activate it. Activating a virtual environment puts the virtual environment-specific python and pip executables into your shell’s PATH. Run the following command to activate your virtual environment:

$ source ~/cse-env/bin/activate

Now check the default python version within the environment to verify we are using 3.7.3:

$ python -V
Python 3.7.3
$ pip -V
pip 19.1.1 from /home/cse/cse-env/lib/python3.7/site-packages/pip (python 3.7)

Now we’re ready to install the CSE server and we won’t have to worry about Python version conflicts as we are installing the CSE packages within our virtual environment, which will only utilize Python 3.7.3.
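With the environment active, installing the server bits is a single pip command (package name per the CSE documentation; pinning a version is optional):

```shell
# Install Container Service Extension inside the active virtualenv;
# pip here resolves to the virtualenv's Python 3.7.3 interpreter
pip install container-service-extension

# Confirm the cse command is available from the virtualenv
cse version
```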

Stay tuned for my next post which will walk through an installation of Container Service Extension server!!

Creating a PvDC for Enterprise PKS in vCloud Director

If you read my recent blog post regarding RBAC in the new release of VMware’s Container Service Extension for vCloud Director, you may have noticed that I mentioned a follow-up post regarding the steps required to add an Enterprise PKS controlled vCenter Server to vCloud Director. I wanted to take a little bit of time to go through that process as it’s a relatively new workflow.

First of all, in our lab deployment, we are using an NSX-T backed vSphere environment to provide networking functionality to the Enterprise PKS deployment. As you may know, NSX-T integration is fairly new in the vCloud Director world (and growing every day!). With this in mind, the process of adding the vSphere/NSX-T components into vCD is a little bit different. Let’s have a look at the workflow for creating a Provider Virtual Datacenter (PvDC) that will support our tenant using CSE to provision Enterprise PKS kubernetes clusters.

Logging into the HTML5 vCloud Director Admin Portal

The first point to note is that we can only add a vSphere environment backed by NSX-T in the HTML5 admin portal in the current release of vCD (9.7 at the time of writing). Let’s navigate to https://vcd-director-url.com/provider and login:

Adding vCenter Server

First, we need to add our vCenter Server (vCSA) that is managed by Enterprise PKS to our vCD environment. Select the menu at the top of the page and select the vSphere Resources option and select the Add option above your list of existing vCSAs:

Next, we will fill out all of the required information vCD requires to connect to our vCSA. After filling out the required information, select Next:

On the NSX-V Manager section, we want to ensure that we disable the Configure Settings option here as we will be utilizing a vSphere environment backed by NSX-T, as opposed to NSX-V. After disabling the NSX-V setting, select Next:

Finally, review the configuration information and select Finish to add the vCSA to your vCD deployment:

Add NSX-T Manager

Now that we’ve added our vCSA sans NSX-V manager, we need to add our NSX-T manager to our vCD deployment. Select the NSX-T Managers menu from the left side of the portal and then select the Add option to plug in our NSX-T Manager information:

Once we fill out the required information, we can select the Save button to finish the process:

Once we’ve verified the action completed successfully in the Task menu, we are ready to create our PvDC!

Creating a PvDC with our PKS vCSA and NSX-T Manager

Normally, we would be able to create PvDCs in the web UI, but PvDCs backed by NSX-T can currently only be created via the API. We will use the vcd-cli tool to accomplish this. First, we need to log in as a system administrator:

$ vcd login vcd.example.com System administrator -iw
Password:
administrator logged in, org: 'System', vdc: ''

Now, we use the following command to create our new PvDC where:

  • "PKS-PVDC" is the name of our new PvDC
  • "ent-cse-vcsa" is the name of our newly added vCSA
  • "pks-nsx-t-mgr" is the name of our newly added NSX-T manager
  • "*" is our storage profile
  • "pks-cluster" is our resource pool
  • "--enable" ensures the PvDC is enabled upon creation

$ vcd pvdc create PKS-PVDC ent-cse-vcsa -t pks-nsx-t-mgr -s "*" -r pks-cluster --enable

Now, let’s navigate back to the portal to ensure the PvDC is present and enabled. Select the Cloud Resources options from the top menu and the Provider VDCs option from the left menu:

Create our Organization and Organization Virtual Datacenters

Now that we’ve built our PvDC out, we are ready to create our tenant org and create a virtual datacenter for that tenant to utilize for their Enterprise PKS workloads.

First, navigate to the Organizations option on the left menu and select the Add option above the list of orgs:

Fill out the required information to create the org and select the Create button:

We now need to create an Organization Virtual Datacenter (OvDC) to support our org. Select the Organization VDC option from the left menu and select the New button:

I won’t walk through the options here as it’s well documented but you will need to define your Organization, PvDC, Allocation Model, Allocation Pool, Storage Policies, and Network Pool so users in your tenant org have resources to use when provisioning.

At this point, we have done all the pre-work required and we’re ready to connect this OrgVDC to our Container Service Extension instance and start provisioning our Enterprise PKS clusters in vCD!!

Implementing RBAC with VMware’s Container Service Extension 2.0 for vCloud Director

In case you haven’t heard, VMware recently announced the general availability of the Container Service Extension 2.0 release for vCloud Director. The biggest addition of functionality in the 2.0 release is the ability to use CSE to deploy Enterprise PKS clusters via the vcd-cli tool in addition to native, upstream Kubernetes clusters. I’ll be adding a blog post shortly on the process required for enabling your vCD environment to support Enterprise PKS deployments via the Container Service Extension.

Today, we are going to talk about utilizing the RBAC functionality introduced in CSE 1.2.6 to assign different permissions to our tenants to allow them to deploy Enterprise PKS (CSE Enterprise) clusters and/or native Kubernetes clusters (CSE Native). The cloud admin will be responsible for enabling and configuring the CSE service and enabling tenant admin/users to deploy CSE Enterprise or CSE Native clusters in their virtual datacenter(s).

Prerequisites

  • The CSE 2.0 server is installed and configured to serve up native Kubernetes clusters AND Enterprise PKS clusters. Please refer to the CSE documentation for more information on this process.
  • Must have at least two organizations present and configured in vCD. In this example, I’ll be utilizing the following orgs:
    • cse-native-org (native k8 provider)
    • cse-ent-org (PKS Enterprise k8 provider)
  • This example also assumes none of the organizations have been enabled for k8 providers up to this point. We will be starting from scratch!

Before We Begin

As noted above, this example assumes we have CSE 2.0 installed already in our environment, but I wanted to take some time to call out the process for enabling RBAC in CSE. When installing CSE, all we need to do to enable RBAC is ensure the enforce_authorization flag is set to true in the service section of the config.yaml file:

…output omitted…

service:
  enforce_authorization: true
  listeners: 5

…output omitted…

Please note, if we set the flag to false, any user with the ability to create compute resources via vCD will also be able to provision k8 clusters.
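A quick way to sanity-check the flag before (re)starting the CSE server is to grep the config file. A minimal sketch, where /tmp/cse-config.yaml is an example stand-in for your real config path:

```shell
# Sketch: write a minimal stand-in config, then verify the RBAC flag is set.
# /tmp/cse-config.yaml is an example path; point grep at your real config.yaml.
cat > /tmp/cse-config.yaml <<'EOF'
service:
  enforce_authorization: true
  listeners: 5
EOF

if grep -q 'enforce_authorization: true' /tmp/cse-config.yaml; then
  echo "RBAC enforced"
else
  echo "WARNING: RBAC disabled -- any user who can create compute resources can deploy clusters"
fi
```
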

Enabling the “cse-native-org” Organization

The first thing we’ll need to do is grant access to the “cse-native-org” to perform CSE Native operations. We’ll first need to login to the vCD instance using the vcd-cli command with a system admin user, then we can add the right to the org.

$ vcd login vcd.example.com System administrator -iw
Password:
administrator logged in, org: 'System', vdc: ''

Now we can grant the org “cse-native-org” the right to deploy native k8 clusters:

$ vcd right add -o 'cse-native-org' "{cse}:CSE NATIVE DEPLOY RIGHT"

At this point, we have enabled the tenant with the ability to provision clusters but is that enough? What happens when we log in and attempt to provision a cluster with a user who belongs to that tenant? We’ll run the create cluster command where test-cluster is the name we assign to our cluster and nodes is the number of worker nodes we’d like to deploy:

$ vcd login vcd.example.com cse-native-org cse-native-admin -iw
Password:
cse-native-admin logged in, org: 'cse-native-org', vdc: 'native-ovdc'

$ vcd cse cluster create test-cluster --network intranet --nodes 1
Usage: vcd cse cluster create [OPTIONS] NAME
Try "vcd cse cluster create -h" for help.

Error: Access Forbidden. Missing required rights.

Here we see the RBAC feature in action! Because we haven’t added the "{cse}:CSE NATIVE DEPLOY RIGHT" right to the role associated with the user, they aren’t allowed to provision k8 clusters. NOTE: If RBAC is not enabled, any user in the org will be able to use CSE to deploy clusters for the cluster type their org is enabled for.

So let’s log back in as the administrator and give our tenant admin user the ability to provision k8 clusters. We have created a role in vCD for this user that mimics the “Organization Admin” permission set and named it cse-admin. The cse-native-admin user has been created with the cse-admin role.

$ vcd login vcd.example.com System administrator -iw
Password:
administrator logged in, org: 'System', vdc: ''

$ vcd user create 'cse-native-admin' 'password' 'cse-admin'

$ vcd role add-right 'cse-admin' "{cse}:CSE NATIVE DEPLOY RIGHT"

Finally, we need to enable the tenant to support native k8 cluster deployments:

$ vcd cse ovdc enable native-ovdc -o cse-native-org -k native
metadataUpdate: Updating metadata for Virtual Datacenter native-ovdc(dd7d117e-6034-467b-b696-de1b943e8664)
task: 3a6bf21b-93e9-44c9-af6d-635020957b21, Updated metadata for Virtual Datacenter native-ovdc(dd7d117e-6034-467b-b696-de1b943e8664), result: success

Now that we have given our user the right to create clusters, let’s give the cluster create command another try:

$ vcd login vcd.example.com cse-native-org cse-native-admin -iw
Password:
cse-native-admin logged in, org: 'cse-native-org', vdc: 'native-ovdc'

$ vcd cse cluster create test-cluster --network intranet --nodes 1
create_cluster: Creating cluster test-cluster(7f509a1c-4743-407d-95d3-355883191313)
create_cluster: Creating cluster vApp test-cluster(7f509a1c-4743-407d-95d3-355883191313)
create_cluster: Creating master node for test-cluster(7f509a1c-4743-407d-95d3-355883191313)
create_cluster: Initializing cluster test-cluster(7f509a1c-4743-407d-95d3-355883191313)
create_cluster: Creating 1 node(s) for test-cluster(7f509a1c-4743-407d-95d3-355883191313)
create_cluster: Adding 1 node(s) to test-cluster(7f509a1c-4743-407d-95d3-355883191313)
task: 3de7f52f-e018-4332-9731-a5fc99bde8f8, Created cluster test-cluster(7f509a1c-4743-407d-95d3-355883191313), result: success

Success!! Our user was now able to provision their cluster!! Now we can get some information about the provisioned k8 cluster and grab our k8 cluster config so we can access our new cluster with kubectl:

$ vcd cse cluster info test-cluster
property         value
---------------  -------------------------------------------------------------------------------
cluster_id       7f509a1c-4743-407d-95d3-355883191313
cse_version      2.0.0
leader_endpoint  10.10.10.210
master_nodes     {'name': 'mstr-4su4', 'ipAddress': '10.10.10.210'}
name             test-cluster
nfs_nodes
nodes            {'name': 'node-utcz', 'ipAddress': '10.10.10.211'}
number_of_vms    2
status           POWERED_ON
template         photon-v2
vapp_href        https://vcd.example.com/api/vApp/vapp-065141f8-4c5b-47b5-abee-c89cb504773b
vapp_id          065141f8-4c5b-47b5-abee-c89cb504773b
vdc_href         https://vcd.example.com/api/vdc/f703babd-8d95-4e37-bbc2-864261f67d51
vdc_name         native_ovdc

$ vcd cse cluster config test-cluster > ~/.kube/config

$ kubectl get nodes
NAME        STATUS   ROLES    AGE   VERSION
mstr-4su4   Ready    master   1d    v1.10.11
node-utcz   Ready    <none>   1d    v1.10.11

Now we’re ready to provision our first Kubernetes app!

Enabling the “cse-ent-org” Organization

Now, our cloud admin has received a request from another tenant (cse-ent-org) that they would like users in their org to be able to provision Enterprise PKS clusters. Our cloud admin will follow the same workflow documented in the previous example but substitute the “CSE Enterprise” rights for the “CSE Native” rights.

Let’s take a look at what happens if a user in the cse-ent-org tries to login and provision a cluster before our cloud admin has enabled the right to do so:

$ vcd login vcd.example.com cse-ent-org cse-ent-user -iw
Password:
cse-ent-user logged in, org: 'cse-ent-org', vdc: 'pks-ovdc'

$ vcd cse cluster create test-cluster
Usage: vcd cse cluster create [OPTIONS] NAME
Try "vcd cse cluster create -h" for help.

Error: Org VDC is not enabled for Kubernetes cluster deployment

As expected, this errors out because our cloud admin has not enabled the right to deploy k8 clusters in the org. Now, our cloud admin will login and enable the right to deploy Enterprise PKS clusters via CSE in the cse-ent-org tenant:

$ vcd login vcd.example.com System administrator -iw
Password:
administrator logged in, org: 'System', vdc: ''

$ vcd right add "{cse}:PKS DEPLOY RIGHT" -o cse-ent-org
Rights added to the Org 'cse-ent-org'

Just as in the previous example, we need to create a user and a role that will allow our user to provision k8 clusters in this org. We have created a custom role in this example that mimics the "vApp Author" permissions and named it pks-k8-role. The role has been assigned to the user that needs to create k8 clusters. Then, we need to grant that role the right to deploy Enterprise PKS clusters:

$ vcd user create 'cse-ent-user' 'password' 'pks-k8-role'

$ vcd role add-right "pks-k8-role" "{cse}:PKS DEPLOY RIGHT"

The user in the tenant org has been granted rights by the cloud admin; now we need to enable the org’s VDC to allow deployment of Enterprise PKS clusters:

$ vcd cse ovdc enable pks-ovdc -o cse-ent-org -k ent-pks -p "small" -d "test.local" 

metadataUpdate: Updating metadata for Virtual Datacenter pks-ovdc(edu4617e-6034-467b-b696-de1b943e8664) 
task: 3a6bf21b-93e9-44c9-af6d-635020957b21, Updated metadata for Virtual Datacenter pks-ovdc(edu4617e-6034-467b-b696-de1b943e8664), result: success

Note: When enabling an org for Enterprise PKS, we need to define the plan and the domain to be assigned to the instances (and load-balancer) that PKS will provision. Currently, you can only enable one plan per org, but you can run the above command again with a different plan if you’d like to switch in the future.
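Since only one plan can be active per org, switching later is just a re-run of the same enable command with a new -p value. A dry-run sketch that only prints the command it would run (the plan name "medium" is an example, not from the environment above):

```shell
# Dry-run sketch: print the command that would switch the org's PKS plan.
# "medium" is an example plan name; everything else mirrors the enable
# command used above. Drop the printf wrapper to actually run it.
PLAN="medium"
printf 'vcd cse ovdc enable pks-ovdc -o cse-ent-org -k ent-pks -p "%s" -d "test.local"\n' "$PLAN"
```
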

It’s also worth mentioning that you can create separate OrgVDCs within the same org and enable one OVDC for Native K8 and the other for Enterprise PKS if users in the same tenant org have different requirements.

Finally, we are ready to provision our PKS cluster. We’ll login as our cse-ent-user and deploy our cluster:

$ vcd login vcd.example.com cse-ent-org cse-ent-user -iw
Password:
cse-ent-user logged in, org: 'cse-ent-org', vdc: 'pks-ovdc'

$ vcd cse cluster create test-cluster-pks
property                     value
---------------------------  --------------------------------------------------------
compute_profile_name         cp--41e132c6-4480-48b1-a075-31f39b968a50--cse-ent-ovdc-1
kubernetes_master_host       test-cluster-pks.test.local
kubernetes_master_ips        In Progress
kubernetes_master_port       8443
kubernetes_worker_instances  2
last_action                  CREATE
last_action_description      Creating cluster
last_action_state            in progress
name                         test-cluster-pks
pks_cluster_name             test-cluster-pks---5d33175a-3010-425b-aabe-bddbbb689b7e
worker_haproxy_ip_addresses

We can continue to monitor the status of our cluster create with the cluster list or cluster info commands:

$ vcd cse cluster list 

k8s_provider    name              status            vdc
--------------  ----------------  ----------------  --------------
ent-pks         test-cluster-pks  create succeeded  pks-ovdc

Now that we have verified our cluster has been created successfully, we need to obtain the config file so we can access the cluster with kubectl:

$ vcd cse cluster config test-cluster-pks > ~/.kube/config

Now our user is ready to deploy apps on their PKS cluster!

As a final test, let’s see what happens when a user in the same org that we just enabled for Enterprise PKS (cse-ent-org) tries to provision a cluster. This user (vapp-user) has been assigned the “vApp Author” role as it exists “out of the box.”

$ vcd login vcd.example.com cse-ent-org vapp-user -iw
Password:
vapp-user logged in, org: 'cse-ent-org', vdc: 'pks-ovdc'

$ vcd cse cluster create test-cluster-pks-2 
Usage: vcd cse cluster create [OPTIONS] NAME 
Try "vcd cse cluster create -h" for help.

Error: Access Forbidden. Missing required rights. 

There we have it, RBAC in full effect!! The user cannot provision a cluster, even though the org is enabled for Enterprise PKS cluster creation, because their assigned role does not have the rights to do so.

Conclusion

This was a quick overview of the capabilities provided by the Role Based Access Control functionality present in VMware’s Container Service Extension 2.0 for vCloud Director. We were able to allow users in orgs to provision k8 clusters of both the native and Enterprise PKS variants. We also showcased how we can prevent “unprivileged” users in the same org from provisioning k8 clusters. Hope you found it useful!!

Deploying VMware vCloud Director on a Single Virtual Machine with a Single Network Interface

Recently, while testing the new Container Service Extension 2.0 Beta release, I found myself needing a quick (and easily replicable) instantiation of vCloud Director in my lab environment. Since this was destined for my lab, I wanted to use the least amount of resources and virtual machines possible to keep things simple. I decided to deploy a single CentOS virtual machine that housed the PostgreSQL database, the RabbitMQ server (for my subsequent deployment of CSE), and the actual vCD server itself. I also decided to deploy using a single network interface.

Before we get started, I want to lay out some assumptions I’ve made in this environment that will need to be taken into consideration if you’d like to replicate this deployment as documented:

  • All of my servers’ hostnames are resolvable (I’m using dnsmasq to easily provide DNS/DHCP support in my lab)

  • I’ve disabled firewalld, as this lab is completely isolated from outside traffic. This is NOT secure and NOT recommended for a production deployment. See the installation documentation for port requirements for vCD.

  • I’ve also persistently disabled SELinux. Again, this is NOT secure and NOT recommended for production, but it’s one less thing to troubleshoot barring config issues.

  • I’ve configured an NTP server in my lab that all the servers connect to. NTP is a requirement for vCD installation.

  • I am going to use the tooling provided by vCD to create self-signed SSL certs for use with vCD. Again, this is NOT secure and NOT recommended for production, but better suited for quick test deployments in a controlled lab environment.

I’ve configured a CentOS 7.6 server with 4 vCPU, 8GB of memory and a 20GB hard drive. After installation of my OS, I verify the configuration stated above and update my server to the latest and greatest:

# yum update -y

Installing PostgreSQL

At this point, we are ready to install and configure our PostgreSQL database (note: vCD requires PostgreSQL 10).

First, we’ll need to configure our server to have access to the PostgreSQL repo:

# rpm -Uvh https://yum.postgresql.org/10/redhat/rhel-7-x86_64/pgdg-centos10-10-2.noarch.rpm

Now that we have configured the repo, we need to install the PostgreSQL 10 packages:

# yum install -y postgresql10-server postgresql10

Now that the database packages are installed, we need to initialize the database, start the service, and ensure it starts automatically at boot:

# /usr/pgsql-10/bin/postgresql-10-setup initdb
# systemctl start postgresql-10.service
# systemctl enable postgresql-10.service

Now that Postgres is installed, let’s verify the installation by logging in to the database with the “postgres” user (created during installation) and set the password:

# su - postgres -c "psql"

psql (10.0)
Type "help" for help.
postgres=# \password postgres
**enter pw at prompt**
postgres=# \q

We can run the createuser command as the postgres OS user to create the vcloud postgres user:

# su - postgres
-bash-4.2$ createuser vcloud --pwprompt

Log back into the psql prompt to create the database the vCD instance will utilize (vcloud), as well as setting the vcloud user password:

-bash-4.2$ psql
postgres=# create database vcloud owner vcloud;
CREATE DATABASE
postgres=# alter user vcloud password 'your-password';
ALTER ROLE

Next, we’ll need to allow our vcloud user to log in to the database:

postgres=# alter role vcloud with login;
ALTER ROLE
postgres=# \q

Finally, we need to allow logins to the Postgres DB with a username/password combination. Since I’m deploying this in a controlled lab environment, I’m going to open connections up to all IP addresses. Add the following line to the bottom of the ~/10/data/pg_hba.conf file (editing as the postgres user):

-bash-4.2$ vi ~/10/data/pg_hba.conf

host all all 0.0.0.0/0 md5

We also need to ensure that the database is listening for connections. Edit the postgresql.conf file, make sure the following line is uncommented, and change ‘localhost’ to ‘*’:

-bash-4.2$ vi 10/data/postgresql.conf

listen_addresses = '*'
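Both edits can also be scripted instead of made by hand. A minimal sketch; the /tmp path below is a stand-in for the real data directory (/var/lib/pgsql/10/data on a default PostgreSQL 10 install) so the sketch is safe to run anywhere:

```shell
# Sketch: script the two PostgreSQL config changes.
# PGDATA here is a demo stand-in; on a real install use /var/lib/pgsql/10/data.
PGDATA=/tmp/pgdata-demo
mkdir -p "$PGDATA"
touch "$PGDATA/pg_hba.conf"
echo "#listen_addresses = 'localhost'" > "$PGDATA/postgresql.conf"

# allow password (md5) logins from any address
echo 'host all all 0.0.0.0/0 md5' >> "$PGDATA/pg_hba.conf"

# uncomment listen_addresses and listen on all interfaces
sed -i "s/^#\{0,1\}listen_addresses.*/listen_addresses = '*'/" "$PGDATA/postgresql.conf"

tail -n1 "$PGDATA/pg_hba.conf"
grep '^listen_addresses' "$PGDATA/postgresql.conf"
```
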

Now that we’ve made these changes, return to the root user and restart the PostgreSQL service:

-bash-4.2$ exit
# systemctl restart postgresql-10

Installing RabbitMQ

Now that we’ve got our PostgreSQL DB configured, we need to configure RabbitMQ on the server. AMQP, the Advanced Message Queuing Protocol, is an open standard for message queuing that supports flexible messaging for enterprise systems. vCloud Director uses the RabbitMQ AMQP broker to provide the message bus used by extension services, object extensions, and notifications.

On our CentOS install, we need to configure access to the EPEL repo, which provides packages and dependencies we’ll need to install RabbitMQ. After configuring the repo, we need to install Erlang, which is the language RabbitMQ is written in:

# yum -y install epel-release
# yum -y install erlang socat

For Linux installs, RabbitMQ provides a precompiled RPM which can be installed directly on the server (once ‘erlang’ is installed). Download and install RabbitMQ via the commands below:

# wget https://www.rabbitmq.com/releases/rabbitmq-server/v3.6.10/rabbitmq-server-3.6.10-1.el7.noarch.rpm
# rpm -Uvh rabbitmq-server-3.6.10-1.el7.noarch.rpm

Now that we have installed RabbitMQ on the server, we are ready to start the RabbitMQ server, ensure it automatically starts on boot, and verify the status of the service:

# systemctl start rabbitmq-server
# systemctl enable rabbitmq-server
# systemctl status rabbitmq-server

Once we’ve verified the status of the RabbitMQ service is “active,” we need to set up an admin user (I’ve used admin in this case, but you can configure any username you’d like) to allow connections to the queue from vCD:

# rabbitmq-plugins enable rabbitmq_management
**output omitted**
# chown -R rabbitmq:rabbitmq /var/lib/rabbitmq/
# rabbitmqctl add_user admin **your-password**
Creating user "admin"
# rabbitmqctl set_user_tags admin administrator
Setting tags for user "admin" to [administrator]
# rabbitmqctl set_permissions -p / admin ".*" ".*" ".*"
Setting permissions for user "admin" in vhost "/"

Installing vCloud Director

We’ve got PostgreSQL and RabbitMQ configured on the server, now we are ready to pull down and install the vCD binary. I’ve pulled the vCD install package directly from MyVMware down to my local desktop and copied the file over to my vCD server at /vcloud.bin and modified permissions so I can execute the script. Before we run the script, we need to install a couple of dependencies the script requires to run to completion:

# yum install libXdmcp libXtst redhat-lsb -y

Now we are ready to run the installation script. After the script finishes, decline the option to run the configure script as we will do this manually later:

# chmod u+x /vcloud.bin
# ./vcloud.bin

**output omitted**

Would you like to run the script now? (y/n)? N

Now that we’ve installed the vCD packages, we can use the tooling provided to generate self-signed certificates. If you have existing certs or you’d like to create and sign your own certs, please refer to the installation documentation for the proper procedure to create signed certs or upload existing certs. The following command creates certificates for the http and console proxy services and stores them in a keystore file at /tmp/cell.ks with a password of mypassword:

# cd /opt/vmware/vcloud-director/bin
# ./cell-management-tool generate-certs -j -p -o /tmp/cell.ks -w mypassword

We can verify that the keystore contains two keys with the following command:

# /opt/vmware/vcloud-director/jre/bin/keytool -storetype JCEKS \
-storepass mypassword -keystore /tmp/cell.ks -list
**output omitted**

consoleproxy, May 6, 2019, PrivateKeyEntry,
Certificate fingerprint (SHA1): 7B:FB...
http, May 6, 2019, PrivateKeyEntry,
Certificate fingerprint (SHA1): 14:DD…

Configuring vCloud Director

Now that we have created our certs, we are ready to configure the vCD server. Since we are using the same interface for http and console proxy, we need to perform an unattended install and define ports for each service. For details on this process, see the installation documentation section for unattended installations. As an example, the following command configures both http and console proxy on the same IP (10.10.10.100), using the default port 443 for secure http access while using 8443 for secure console access. We also define the keystore we created earlier, as well as the password for that keystore.

First, let’s change directory into the location of the configure script:

# cd /opt/vmware/vcloud-director/bin

Now we are ready to run the configure command:

# ./configure -ip 10.10.10.100 -cons 10.10.10.100 --primary-port-http 80 \
--console-proxy-port-https 8443 -dbtype postgres \
-dbhost 10.10.10.100 -dbname vcloud -dbuser vcloud \
-dbpassword **db-password** -k /tmp/cell.ks -w mypassword \
--enable-ceip false -unattended
......................................../
Database configuration complete.

We can view the logs for the configuration attempt in the directory /opt/vmware/vcloud-director/logs/ at the configure-timestamp location:

# cd /opt/vmware/vcloud-director/logs/
# less configure-timestamp

**output omitted**

vCloud Director configuration is now complete.
Once the vCloud Director server has been started you will be able to
access the first-time setup wizard at this URL:
https://FQDN

At this point, we are ready to start the vCD service:

# service vmware-vcd start    

After confirming the service has started, navigate to https://FQDN to begin your vCD configuration!!

Enjoy!!

Welcome!!

Well, here we are. Welcome to mannimal.blog!! 

This space will serve as a place to publish some best practices around solution design for cloud providers looking to build modern platforms based on VMware technology. 

But before we get into that, I wanted to give a little background on myself. My name is Joe Mann and I am a Staff Cloud Solutions Architect at VMware. I cover the VMware Cloud Provider Program (VCPP) with a focus on cloud native technologies. I spent the last 7 years of my career as a Solutions Architect at Red Hat covering the Red Hat Cloud and Service Provider Program. As you can tell, my focus in this space has been helping cloud and service providers build modern infrastructure to support their customers’ ever-evolving needs. 

Stay tuned for some upcoming posts that will focus on vCloud Director install and config as well as a look at the 2.0 Beta release of the Container Service Extension. Thanks for stopping by!!