Backing Up Your Kubernetes Applications with Velero v1.1

In this post, I’m going to walk through the process of installing and using Velero v1.1 to back up a Kubernetes application that includes persistent data stored in persistent volumes. I will then simulate a DR scenario by completely deleting the application and using Velero to restore the application to the cluster, including the persistent data.

Meet Velero!! ⛵

Velero is a backup and recovery solution built specifically to assist in the backup (and migration) of Kubernetes applications, including their persistent storage volumes. You can even use Velero to back up an entire Kubernetes cluster for restore and/or migration! Velero addresses various use cases, including but not limited to:

  • Taking backups of your cluster to allow for restore in case of infrastructure loss/corruption
  • Migration of cluster resources to other clusters
  • Replication of production cluster/applications to dev and test clusters

Velero is essentially comprised of two components:

  • A server that runs as a set of resources on your Kubernetes cluster
  • A command-line client that runs locally

Velero also supports backing up and restoring Kubernetes volumes using restic, an open source backup tool. Velero needs an S3 API-compatible storage server to store these volume backups. To satisfy this requirement, I will also deploy a Minio server in my Kubernetes cluster so Velero has somewhere to store my Kubernetes volume backups. Minio is a lightweight, easy-to-deploy S3 object store that you can run on premises. In a production environment, you’d want to deploy your S3-compatible storage solution in another cluster or environment to protect against total data loss in the case of infrastructure failure.
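
As a preview of what’s ahead, here’s roughly what the server-side install looks like once Minio is up and running in the cluster. This is a minimal sketch based on Velero v1.1’s Minio quickstart; the bucket name (velero), the credentials file path, and the Minio service URL are assumptions you’d adjust for your own environment:

velero install \
    --provider aws \
    --bucket velero \
    --secret-file ./credentials-velero \
    --use-restic \
    --use-volume-snapshots=false \
    --backup-location-config region=minio,s3ForcePathStyle="true",s3Url=http://minio.velero.svc:9000

# back up a namespace (and its restic-annotated volumes), then restore it
velero backup create my-app-backup --include-namespaces my-app
velero restore create --from-backup my-app-backup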

Continue reading “Backing Up Your Kubernetes Applications with Velero v1.1”

Using Harbor and Kubeapps to Serve Custom Helm Charts

In my last post, I walked through the process of deploying Kubeapps in an Enterprise PKS Kubernetes cluster. In this post, I wanted to examine the workflow required for utilizing Harbor, an open source cloud native registry, as an option to serve out a curated set of Helm charts to developers in an organization. We’ll walk through a couple of scenarios, including configuring a “private” project in Harbor that houses Helm charts and container images for a specific group of developers. Building on my last post, we’ll also add this new Helm chart repository into our Kubeapps deployment to allow our developers to deploy our curated applications directly from the Kubeapps dashboard.

Harbor is an open source trusted cloud native registry project that stores, signs, and scans content. Harbor extends the open source Docker Distribution by adding the functionality users typically require, such as security, identity, and management. Having a registry close to the build and run environment can improve image transfer efficiency. Harbor supports replication of images between registries and also offers advanced security features such as user management, access control, and activity auditing. Enterprise support for Harbor Container Registry is included with VMware Enterprise PKS.
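
To give a flavor of the workflow before we dive in: once a project exists in Harbor, it exposes a ChartMuseum-backed chart repository at /chartrepo/<project>. Here’s a rough sketch of adding such a repo and pushing a chart to it with the helm-push plugin; the hostname, project name (dev-charts), credentials, and chart filename are all placeholders:

helm repo add dev-charts https://harbor.example.com/chartrepo/dev-charts \
    --username <harbor-user> --password <harbor-password>

# the helm-push plugin handles uploads to ChartMuseum-style repos
helm plugin install https://github.com/chartmuseum/helm-push
helm push ./my-app-0.1.0.tgz dev-charts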

Continue reading “Using Harbor and Kubeapps to Serve Custom Helm Charts”

Deploying Kubeapps and Exposing the Dashboard via Ingress Controller in Enterprise PKS

In this post, I’d like to take some time to walk through the process of deploying Kubeapps in an Enterprise PKS Kubernetes cluster. I’ll also walk through the process of utilizing the built-in ingress controller provided by NSX-T to expose the Kubeapps dashboard via a fully qualified domain name.

What is Kubeapps?

There’s been a lot of excitement in the Cloud Native space at VMware since the acquisition of Bitnami last year. The Bitnami team has done a lot of amazing work over the years to simplify the process of application deployment across all types of infrastructure, both in public and private clouds. Today we are going to take a look at Kubeapps, an open source project developed by the folks at Bitnami: a web-based UI for deploying and managing applications in Kubernetes clusters. Kubeapps allows users to:

  • Browse and deploy Helm charts from chart repositories
  • Inspect, upgrade and delete Helm-based applications installed in the cluster
  • Add custom and private chart repositories (supports ChartMuseum and JFrog Artifactory)
  • Browse and provision external services from the Service Catalog and available Service Brokers
  • Connect Helm-based applications to external services with Service Catalog Bindings
  • Secure authentication and authorization based on Kubernetes Role-Based Access Control
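
As a preview of the deployment itself: Kubeapps ships as a Helm chart, so the install is only a couple of commands. This is a minimal sketch of the Helm 2-era install with ingress enabled so the dashboard can be served at a FQDN; the hostname is a placeholder, and the exact ingress value names can vary by chart version, so check helm inspect values bitnami/kubeapps first:

helm repo add bitnami https://charts.bitnami.com/bitnami
helm install --name kubeapps --namespace kubeapps bitnami/kubeapps \
    --set ingress.enabled=true \
    --set ingress.hosts[0].name=kubeapps.example.com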

Continue reading “Deploying Kubeapps and Exposing the Dashboard via Ingress Controller in Enterprise PKS”

Creating a virtualenv with Python 3.7.3

As I’ve mentioned in recent posts, VMware’s Container Service Extension 2.0 (CSE) has recently been released. The big news around the 2.0 release is the ability to provision Enterprise PKS clusters via CSE.

It’s important to note that CSE 2.0 has a dependency on Python 3.7.3 or later. I had some trouble managing different versions of Python 3 on the CentOS host I used to support the CSE server component, so I wanted to document my steps for creating a virtual environment via virtualenv utilizing Python 3.7.3 and installing CSE Server 2.0 within that virtual environment.

virtualenv is a tool for creating isolated Python environments. virtualenv creates a folder that contains all the executables necessary to use the packages a Python project needs. This is useful in my situation, as I had various versions of Python 3 installed on my CentOS server and wanted to ensure Python 3.7.3 was used exclusively for the CSE installation without affecting other services on the server that utilize Python 3.
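
To give a sense of where we’re headed, once Python 3.7.3 is installed (covered next), the virtual environment workflow looks something like this; the environment name cse-env is just a placeholder:

virtualenv -p /usr/local/bin/python3.7 cse-env
source cse-env/bin/activate
python --version    # should now report Python 3.7.3
pip install container-service-extension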

Installing Python 3.7.3 on CentOS

The first thing we need to do is install (and compile) Python 3.7.3 on our CentOS server.

We’ll need some development packages and the GCC compiler installed on the server:

# yum install -y zlib-devel gcc openssl-devel bzip2-devel libffi-devel
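
With the build dependencies in place, downloading and compiling Python 3.7.3 looks something like the following. Note the use of make altinstall rather than make install: the new interpreter lands at /usr/local/bin/python3.7 without overwriting the system python3.

cd /usr/src
wget https://www.python.org/ftp/python/3.7.3/Python-3.7.3.tgz
tar xzf Python-3.7.3.tgz
cd Python-3.7.3
./configure --enable-optimizations
make altinstall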

Continue reading “Creating a virtualenv with Python 3.7.3”

Creating a PvDC for Enterprise PKS in vCloud Director

If you read my recent blog post regarding RBAC in the new release of VMware’s Container Service Extension for vCloud Director, you may have noticed that I mentioned a follow-up post regarding the steps required to add an Enterprise PKS-controlled vCenter Server to vCloud Director. I wanted to take a little bit of time to go through that process, as it’s a relatively new workflow.

First of all, in our lab deployment, we are using an NSX-T backed vSphere environment to provide networking functionality to the Enterprise PKS deployment. As you may know, NSX-T integration is fairly new in the vCloud Director world (and growing every day!). With this in mind, the process of adding the vSphere/NSX-T components into vCD is a little bit different. Let’s have a look at the workflow for creating a Provider Virtual Datacenter (PvDC) that will support our tenants using CSE to provision Enterprise PKS Kubernetes clusters.

Logging into the HTML5 vCloud Director Admin Portal

The first point to note is that we can only add a vSphere environment backed by NSX-T in the HTML5 admin portal in the current release of vCD (9.7 at the time of writing). Let’s navigate to https://vcd-director-url.com/provider and login:

Continue reading “Creating a PvDC for Enterprise PKS in vCloud Director”

Implementing RBAC with VMware’s Container Service Extension 2.0 for vCloud Director

In case you haven’t heard, VMware recently announced the general availability of the Container Service Extension 2.0 release for vCloud Director. The biggest addition of functionality in the 2.0 release is the ability to use CSE to deploy Enterprise PKS clusters via the vcd-cli tool in addition to native, upstream Kubernetes clusters. I’ll be adding a blog post shortly on the process required for enabling your vCD environment to support Enterprise PKS deployments via the Container Service Extension.

Today, we are going to talk about utilizing the RBAC functionality introduced in CSE 1.2.6 to assign different permissions to our tenants, allowing them to deploy Enterprise PKS (CSE Enterprise) clusters and/or native Kubernetes clusters (CSE Native). The cloud admin is responsible for enabling and configuring the CSE service and for enabling tenant admins/users to deploy CSE Enterprise or CSE Native clusters in their virtual datacenter(s).

Prerequisites

  • The CSE 2.0 server is installed and configured to serve up native Kubernetes clusters AND Enterprise PKS clusters. Please refer to the CSE documentation for more information on this process.
  • Must have at least two organizations present and configured in vCD. In this example, I’ll be utilizing the following orgs:
    • cse-native-org (native k8s provider)
    • cse-ent-org (Enterprise PKS k8s provider)
  • This example also assumes none of the organizations have been enabled for a k8s provider up to this point. We will be starting from scratch!
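
To sketch where we’ll end up: with RBAC enforcement turned on in the CSE config file (the enforce_authorization flag under the service section), the cloud admin grants each org the appropriate CSE right, then enables each org VDC for its k8s provider. The commands below assume CSE 2.0’s vcd-cli syntax; the right names, OVDC names, and PKS plan/domain values are examples, so verify against your version with vcd cse ovdc enable --help:

# grant the CSE-registered rights to each org
vcd right add "{cse}:CSE NATIVE DEPLOY RIGHT" -o cse-native-org
vcd right add "{cse}:PKS DEPLOY RIGHT" -o cse-ent-org

# enable an org VDC in each org for the appropriate k8s provider
vcd org use cse-native-org
vcd cse ovdc enable native-ovdc -k native
vcd org use cse-ent-org
vcd cse ovdc enable pks-ovdc -k ent-pks --pks-plan "small" --pks-cluster-domain "pks.example.com"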

Continue reading “Implementing RBAC with VMware’s Container Service Extension 2.0 for vCloud Director”

Deploying VMware vCloud Director on a Single Virtual Machine with a Single Network Interface

Recently, while testing the new Container Service Extension 2.0 Beta release, I found myself needing a quick (and easily replicable) instantiation of vCloud Director in my lab environment. Since this was a lab deployment, I wanted to use the least amount of resources and virtual machines possible to keep things simple. I decided to deploy a single CentOS virtual machine that housed the PostgreSQL database, the RabbitMQ server (for my subsequent deployment of CSE), and the actual vCD server itself. I also decided to deploy with a single network interface.

Before we get started, I want to lay out some assumptions I’ve made in this environment that will need to be taken into consideration if you’d like to replicate this deployment as documented:

  • All of my servers’ hostnames are resolvable (I’m using dnsmasq to easily provide DNS/DHCP support in my lab)

  • I’ve disabled firewalld, as this lab is completely isolated from outside traffic. This is NOT secure and NOT recommended for a production deployment. See the installation documentation for vCD’s port requirements.

  • I’ve also persistently disabled SELinux. Again, this is NOT secure and NOT recommended for production; I just wanted one less thing to troubleshoot in case of config issues.

  • I’ve configured an NTP server in my lab that all the servers connect to. NTP is a requirement for vCD installation.

  • I am going to use the tooling provided by vCD to create self-signed SSL certs for use with vCD. Again, this is NOT secure and NOT recommended for production, but it is well suited to quick test deployments in a controlled lab environment.

I’ve configured a CentOS 7.6 server with 4 vCPU, 8GB of memory and a 20GB hard drive. After installation of my OS, I verify the configuration stated above and update my server to the latest and greatest:

yum update -y
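
For reference, verifying the assumptions above looks something like this on CentOS 7 (lab use only; each of these weakens security, as noted):

systemctl stop firewalld && systemctl disable firewalld
setenforce 0                           # disable SELinux for the running session
sed -i 's/^SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config
timedatectl | grep -i NTP              # confirm NTP is synchronized
hostname -f                            # confirm the FQDN resolves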

Continue reading “Deploying VMware vCloud Director on a Single Virtual Machine with a Single Network Interface”

Welcome!!

Well, here we are. Welcome to mannimal.blog!! 

This space will serve as a place to publish some best practices around solution design for cloud providers looking to build modern platforms based on VMware technology. 

But before we get into that, I wanted to give a little background on myself. My name is Joe Mann and I am a Staff Cloud Solutions Architect at VMware. I cover the VMware Cloud Provider Program (VCPP) with a focus on cloud native technologies. I spent the last 7 years of my career as a Solutions Architect at Red Hat covering the Red Hat Cloud and Service Provider Program. As you can tell, my focus in this space has been helping cloud and service providers build modern infrastructure to support their customers’ ever-evolving needs. 

Stay tuned for some upcoming posts that will focus on vCloud Director install and config as well as a look at the 2.0 Beta release of the Container Service Extension. Thanks for stopping by!!