Data Protection in VMware Tanzu Mission Control

Ever “accidentally” deleted your app or namespace from your Kubernetes cluster? Or even worse, destroyed your entire cluster?!?! Well… have no fear, the Tanzu Mission Control team recently announced the release of the Data Protection feature for Tanzu Mission Control. This new feature utilizes the open source project Velero to provide backup, migration, and recovery functionality for any Kubernetes cluster under the control of Tanzu Mission Control. As mentioned in the previously linked blog post, Tanzu Mission Control handles the installation and ongoing lifecycle management of the Velero components running on the cluster, so no knowledge of Velero is required to take advantage of this new feature!

In this blog post, I will walk through the process of utilizing the Data Protection feature to back up a WordPress application deployed on a Tanzu Kubernetes Grid (TKG) cluster in AWS. The WordPress application utilizes persistent volume claims (PVCs) to store the persistent data that supports the blog. After taking the backup, I will simulate a data loss scenario by deleting the namespace containing the application and then use the Tanzu Mission Control console to restore the application and its persistent data!
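
For reference, everything in the walkthrough happens through the Tanzu Mission Control console, but under the hood the workflow maps to Velero operations along these lines (a rough sketch; the wordpress namespace and backup name are placeholders for illustration):

$ velero backup create wordpress-backup --include-namespaces wordpress

# simulate the data loss scenario by removing the application's namespace
$ kubectl delete namespace wordpress

# restore the application and its persistent data from the backup
$ velero restore create --from-backup wordpress-backup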

Continue reading “Data Protection in VMware Tanzu Mission Control”

Upgrading to CSE 2.6.0 from CSE 2.5.1

In my last post, I gave an update and overview regarding the new Container Service Extension 2.6.0 release, including a look at the new CSE UI Plugin for VCD. As I had to perform the operation myself, I wanted to take some time to detail the process for upgrading existing CSE installations to 2.6.0. There aren’t many changes from 2.5.1 to 2.6.0, as far as server installation goes, but there are a few to note.

Create New virtualenv and Install CSE 2.6.0 Bits

I like to utilize Python virtual environments with my CSE installs, which allows me to jump back and forth between CSE builds as I work with the engineering team to test new releases or set up reproducers for customer environments. At the very least, I recommend using a virtual environment so you don’t have to wrestle with base Python version compatibility on your OS. See my post on creating a Python 3.7.3 virtual environment to support CSE server installations here.

So the first thing I’ll do on my CSE server is create that new virtual environment in the cse-2.6.0 directory:

$ mkdir cse-2.6.0
$ python3.7 -m virtualenv cse-2.6.0/
$ source cse-2.6.0/bin/activate

Now I’m ready to install the new build of CSE:

$ pip install container-service-extension

$ cse version
CSE, Container Service Extension for VMware vCloud Director, version 2.6.0

Note: If you’d like to use the same virtual environment as you used in your previous installation, you simply need to source that virtual environment and upgrade CSE:

$ pip install --upgrade container-service-extension

Continue reading “Upgrading to CSE 2.6.0 from CSE 2.5.1”

Introducing the CSE 2.6.0 Release with VCD UI Plugin

I am thrilled to announce that the Container Service Extension version 2.6.0 for VMware Cloud Director is generally available; please see the release notes here. I wanted to take some time to go over some of the new features introduced with CSE 2.6.0, including the much anticipated VCD UI plugin, which allows providers to enable tenants to deploy CSE Kubernetes clusters via the VCD tenant portal!

In this post, I’ll walk through some of the new features introduced in the CSE 2.6.0 release as well as step through the installation of the CSE UI Plugin for VCD.

Continue reading “Introducing the CSE 2.6.0 Release with VCD UI Plugin”

Deploy Kubernetes Clusters to vSphere with CAPV 0.6.0

So you spent a week putting together a two-part blog post on how to deploy clusters using Cluster API Provider vSphere. You feel pretty good about yourself, right? Well, guess what: a new version of CAPV is right around the corner, so you’d better update that blog post! That’s why we’re here; things move fast in the world of Kubernetes…

With the release of Cluster API v1alpha3, the CAPV team has also released a new build of CAPV (0.6.0) with support for v1alpha3. You can review all of the changes from v1alpha2 to v1alpha3 here, but the main change we’ll look at in this blog post is the creation of the management cluster and workload clusters with clusterctl and how that differs in v1alpha3.
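
To give a feel for the v1alpha3 workflow (a minimal sketch; the cluster name, Kubernetes version, and node counts are placeholders, and I’m assuming the vSphere environment variables the provider expects have already been exported), clusterctl now initializes the management cluster and renders workload cluster manifests directly:

# initialize the current cluster as a CAPV management cluster
$ clusterctl init --infrastructure vsphere

# generate a workload cluster manifest and apply it to the management cluster
$ clusterctl config cluster workload-1 --infrastructure vsphere \
    --kubernetes-version v1.17.3 \
    --control-plane-machine-count 1 \
    --worker-machine-count 3 > workload-1.yaml
$ kubectl apply -f workload-1.yaml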

In my previous series of posts on using CAPV to deploy Kubernetes clusters to vSphere environments, I specifically dealt with some of the requirements to support this type of deployment in VMware Cloud on AWS. I won’t be rehashing all of that in this post so feel free to refer to the original posts if you’d like to learn the specifics of deploying clusters to VMC with CAPV.

Continue reading “Deploy Kubernetes Clusters to vSphere with CAPV 0.6.0”

Kubernetes Cluster Creation in VMware Cloud on AWS with CAPV: Part 2

In Part 1 of my series on deploying Kubernetes clusters to VMware Cloud on AWS environments with ClusterAPI Provider vSphere, I detailed the processes required to stand up the CAPV management plane. After completing those steps, I am ready to provision a workload cluster to VMC using CAPV.

Creating Workload Clusters

The CAPV management cluster is the brains of the operation, but I still need to deploy some workload clusters for my teams of developers to deploy their applications onto. The management cluster automates the provisioning of all of the provider components that support my workload clusters, as well as the instantiation of the provisioned VMs as a Kubernetes cluster. The basic use case here is that I, as the infrastructure admin, am responsible for utilizing the CAPV management cluster to provision multiple workload clusters that can support individual teams of developers, individual application deployments, etc. The CAPV management cluster allows me to easily deploy a consistent cluster in a repeatable fashion with very little manual effort. I can quickly deploy a test, dev, and prod set of clusters for a team or deploy 5 different workload clusters for 5 different groups of developers.
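
As a quick illustration of that hand-off (a hedged sketch; the workload-1 cluster name is a placeholder), once a workload cluster manifest has been applied, the management cluster exposes the new cluster’s kubeconfig as a secret that I can pull down and hand to a development team:

# watch the management cluster reconcile the workload cluster and its machines
$ kubectl get clusters,machines

# retrieve the generated kubeconfig for the new workload cluster
$ kubectl get secret workload-1-kubeconfig -o jsonpath='{.data.value}' | base64 -d > workload-1.kubeconfig

# verify the workload cluster is up
$ kubectl --kubeconfig workload-1.kubeconfig get nodes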

Continue reading “Kubernetes Cluster Creation in VMware Cloud on AWS with CAPV: Part 2”

Kubernetes Cluster Creation in VMware Cloud on AWS with CAPV: Part 1

One of the biggest challenges in starting a Cloud Native practice is understanding how to establish a repeatable and consistent method of deploying and managing Kubernetes clusters. That’s where ClusterAPI comes in handy!! ClusterAPI (CAPI) is a Kubernetes project to bring declarative, Kubernetes-style APIs to cluster creation, configuration, and management. It provides optional, additive functionality on top of core Kubernetes to manage the lifecycle of a Kubernetes cluster. Now you can use Kubernetes to create more Kubernetes!!!!

ClusterAPI is responsible for provisioning all of the infrastructure required to support a Kubernetes cluster. CAPI also provides the ability to perform Day 2 operations, such as scaling and upgrading clusters. Most importantly, it provides a consistent management plane to perform these actions on multiple clusters. In fact, ClusterAPI is a big part of what will allow VI admins to orchestrate and automate the provisioning of Kubernetes clusters natively as a part of vSphere with Project Pacific. Learn more about the Project Pacific architecture and how it utilizes ClusterAPI here.
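
For example (a hedged sketch; the MachineDeployment name is a placeholder), scaling out a cluster’s worker pool is just a matter of scaling the corresponding Kubernetes object on the management cluster:

# scale a workload cluster's worker nodes from 3 to 5 replicas
$ kubectl scale machinedeployment workload-1-md-0 --replicas=5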

Continue reading “Kubernetes Cluster Creation in VMware Cloud on AWS with CAPV: Part 1”

Container Service Extension 2.5 Installation: Part 3

In Parts 1 and 2 of my series on installing and configuring the Container Service Extension for VMware Cloud Director, I focused on setting the CSE server up to support CSE Standard Kubernetes cluster creation.

CSE Standard clusters are deployed as vApps that utilize NSX-V networking resources, with Weave serving as the Container Network Interface (CNI) for the Kubernetes clusters. In Part 3 of my series, I wanted to take some time to look at configuring the CSE server to support the creation of CSE Enterprise Kubernetes clusters. CSE Enterprise clusters are VMware Enterprise PKS Kubernetes clusters deployed on top of NSX-T networking resources, utilizing the NSX Container Plugin as the CNI. CSE Enterprise brings enterprise-grade features and functionality to CSE that include, but are not limited to:

  • HA, multi-master Kubernetes clusters
  • Dynamic persistent storage provisioning with the vSphere Cloud Provider integration
  • Automated Day 1 and Day 2 Kubernetes cluster management via BOSH Director
  • Microsegmentation capability for Kubernetes resources via integration with NSX-T
  • Automated creation of Kubernetes service type LoadBalancer and ingress resources via NSX-T L4/L7 load balancers
  • Support for Harbor, an open source cloud native registry

Continue reading “Container Service Extension 2.5 Installation: Part 3”

Exploring the Nirmata Kubernetes Extension for VMware Cloud Director

If you’ve been following my blog, you know that a lot of the content I publish focuses on VMware’s Container Service Extension and its integration with VMware Cloud Director, which allows service providers to create a Kubernetes-as-a-Service experience for their tenants utilizing their existing VCD-managed infrastructure.

Recently, my VMware colleague Daniel Paluszek and I partnered with Nirmata to perform some testing on their new Kubernetes Extension for VMware Cloud Director. The Nirmata Kubernetes Extension for VCD builds on the rich UI experience already present in the VCD tenant portal by providing a workflow for provisioning Kubernetes clusters via CSE using the native UI.

The Native CSE Experience

As I’ve written about in my previous posts on CSE, once a service provider enables a tenant to provision Kubernetes clusters via CSE, tenants will use the vcd-cli with a CSE extension enabled to provision and manage Kubernetes clusters. For example, a tenant would log in to their VCD Org through the vcd-cli and issue the following command to create a Kubernetes cluster via CSE:

$ vcd cse cluster create k8-cluster-1 --network outside --nodes 1

where k8-cluster-1 is the name of the cluster, --network is the OvDC network the cluster nodes will utilize, and --nodes 1 defines the number of worker nodes the cluster will contain.
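
From there (a hedged sketch that reuses the hypothetical k8-cluster-1 name), the tenant can manage the cluster and retrieve its kubeconfig with the same CLI:

# list the Kubernetes clusters available in the tenant's org
$ vcd cse cluster list

# download the cluster's kubeconfig and verify access with kubectl
$ vcd cse cluster config k8-cluster-1 > k8-cluster-1.kubeconfig
$ kubectl --kubeconfig k8-cluster-1.kubeconfig get nodes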

While many users are familiar enough with a CLI to adapt to this method of resource provisioning, one piece of feedback we get from our partner community is that they’d like to offer a native UI experience in the tenant portal to allow their end customers to more intuitively provision Kubernetes clusters via VCD. That’s where the Nirmata Kubernetes Extension for VCD comes in…

Continue reading “Exploring the Nirmata Kubernetes Extension for VMware Cloud Director”

Container Service Extension 2.5 Installation: Part 2

Building on Part 1 of my series on installing VMware’s Container Service Extension 2.5.0, in this post, I’ll walk through the process of configuring a client server to interact with CSE via the vcd-cli tool. I’ll also walk through the process of onboarding a tenant as well as the workflow, from the tenant’s perspective, of provisioning and managing a Kubernetes cluster.

Configuring a CSE Client

Now that I’ve deployed my CSE server, I’ll need to utilize the vcd-cli tool with the CSE client extension enabled in order to interact with the CSE service. For the client server, I am, again, utilizing a CentOS 7.6 server and a Python 3.7.3 virtual environment to install and utilize the vcd-cli tool in this walkthrough.

The first thing I’ll need to do is create and activate my virtual environment, which I will install in the ~/cse-client directory:

$ python3.7 -m virtualenv ~/cse-client
$ source ~/cse-client/bin/activate
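
With the virtual environment active, the remaining client setup (a rough sketch; see the CSE documentation for the exact steps in your version) is installing the CSE package, which ships the vcd-cli client extension, and then enabling that extension in the vcd-cli profile:

$ pip install container-service-extension

# enable the CSE client extension by adding the following to ~/.vcd-cli/profiles.yaml
# (the file is created after the first vcd login):
# extensions:
# - container_service_extension.client.cse

$ vcd cse version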

Continue reading “Container Service Extension 2.5 Installation: Part 2”

Container Service Extension 2.5 Installation: Part 1

With the recent release of the Container Service Extension 2.5.0, I wanted to take some time to walk through the installation and configuration of the Container Service Extension (CSE) server in conjunction with VMware vCloud Director 10.

This will be a series of three blog posts that cover the following topics:

  • Part 1: CSE server installation and configuration
  • Part 2: CSE client configuration and tenant onboarding
  • Part 3: Enabling CSE Enterprise Kubernetes cluster creation

Container Service Extension Overview

Before we get started, I wanted to talk a bit about CSE and what purpose it serves in a Service Provider’s environment. The Container Service Extension is a VMware vCloud Director extension that helps tenants create, lifecycle manage, and interact with Kubernetes clusters in vCloud Director-managed environments.

There are currently two versions of CSE: Standard and Enterprise. CSE Standard brings Kubernetes-as-a-Service to vCD by creating customized vApp templates and enabling tenant/organization administrators to deploy fully functional Kubernetes clusters in self-contained vApps. CSE Standard cluster creation can be enabled on existing NSX-V backed OrgVDCs in a tenant’s environment. With the introduction of CSE Enterprise in the 2.0 release, VMware has also added the ability for tenants to provision VMware Enterprise PKS Kubernetes clusters backed by NSX-T resources in vCloud Director-managed environments. In this blog post, I am going to focus on the enablement of CSE Standard Kubernetes cluster creation in an existing vCloud Director OvDC.

For more information on CSE, have a look at the Kubernetes-as-a-Service in vCloud Director reference architecture (authored by yours truly 😄) as well as the CSE Installation Documentation.

Continue reading “Container Service Extension 2.5 Installation: Part 1”