In Part 1 of my series on deploying Kubernetes clusters to VMware Cloud on AWS environments with ClusterAPI Provider vSphere (CAPV), I detailed the process required to stand up the CAPV management plane. After completing those steps, I am ready to provision a workload cluster to VMC using CAPV.
Creating Workload Clusters
The CAPV management cluster is the brains of the operation, but I still need to deploy workload clusters for my teams of developers to deploy their applications onto. The management cluster automates the provisioning of all of the provider components that support my workload clusters, as well as the instantiation of the VMs that make up each Kubernetes cluster. The basic use case here is that I, as the infrastructure admin, use the CAPV management cluster to provision multiple workload clusters that can support individual teams of developers, individual application deployments, and so on. The CAPV management cluster allows me to deploy consistent clusters in a repeatable fashion with very little manual effort. I can quickly stand up a test/dev/prod set of clusters for one team or deploy five different workload clusters for five different groups of developers.
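To give a sense of how repeatable that provisioning is, here is a minimal sketch of generating and applying a workload cluster manifest with clusterctl. The cluster name, Kubernetes version, and node counts are placeholders, and depending on your clusterctl release the subcommand may be generate cluster rather than config cluster:

$ clusterctl config cluster team-a-workload-1 \
    --infrastructure vsphere \
    --kubernetes-version v1.16.3 \
    --control-plane-machine-count 1 \
    --worker-machine-count 3 > team-a-workload-1.yaml
$ kubectl apply -f team-a-workload-1.yaml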
Continue reading “Kubernetes Cluster Creation in VMware Cloud on AWS with CAPV: Part 2”
One of the biggest challenges in starting a Cloud Native practice is understanding how to establish a repeatable and consistent method of deploying and managing Kubernetes clusters. That’s where ClusterAPI comes in handy!! ClusterAPI (CAPI) is a Kubernetes project to bring declarative, Kubernetes-style APIs to cluster creation, configuration, and management. It provides optional, additive functionality on top of core Kubernetes to manage the lifecycle of a Kubernetes cluster. Now you can use Kubernetes to create more Kubernetes!!!!
ClusterAPI is responsible for provisioning all of the infrastructure required to support a Kubernetes cluster. CAPI also provides the ability to perform Day 2 operations, such as scaling and upgrading clusters. Most importantly, it provides a consistent management plane to perform these actions on multiple clusters. In fact, ClusterAPI is a big part of what will allow VI admins to orchestrate and automate the provisioning of Kubernetes clusters natively as a part of vSphere with Project Pacific. Learn more about the Project Pacific architecture and how it utilizes ClusterAPI here.
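For a taste of those Day 2 operations, scaling a workload cluster's worker pool from the management cluster is just a matter of adjusting the replica count on the corresponding MachineDeployment. The object name below is a placeholder, and this assumes a ClusterAPI version whose MachineDeployments expose the scale subresource:

$ kubectl get machinedeployments
$ kubectl scale machinedeployment team-a-workload-1-md-0 --replicas=5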
Continue reading “Kubernetes Cluster Creation in VMware Cloud on AWS with CAPV: Part 1”
In Parts 1 and 2 of my series on installing and configuring the Container Service Extension for VMware Cloud Director, I focused on setting the CSE server up to support CSE Standard Kubernetes cluster creation.
CSE Standard clusters are composed of vApps deployed on NSX-V networking resources and use Weave as the Container Network Interface (CNI) for the Kubernetes clusters. In Part 3 of my series, I wanted to take some time to look at configuring the CSE server to support the creation of CSE Enterprise Kubernetes clusters. CSE Enterprise clusters are VMware Enterprise PKS Kubernetes clusters deployed on top of NSX-T networking resources, utilizing the NSX Container Plugin as the CNI. CSE Enterprise brings enterprise-grade features and functionality to CSE that include, but are not limited to:
- HA, multi-master Kubernetes clusters
- Dynamic persistent storage provisioning with the vSphere Cloud Provider integration
- Automated Day 1 and Day 2 Kubernetes cluster management via BOSH Director
- Microsegmentation capability for Kubernetes resources via integration with NSX-T
- Automated creation of Kubernetes services of type LoadBalancer and ingress resources via NSX-T L4/L7 load balancers
- Support for Harbor, an open source cloud native registry
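Once an OvDC has been enabled for the Enterprise PKS provider, the tenant-facing workflow looks just like the CSE Standard one; CSE routes the request to Enterprise PKS based on the Kubernetes provider configured on the OvDC. A minimal sketch, with the cluster and network names as placeholders:

$ vcd cse cluster create pks-cluster-1 --network tenant-ovdc-net --nodes 2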
Continue reading “Container Service Extension 2.5 Installation: Part 3”
If you’ve been following my blog, you know that a lot of the content I publish focuses on VMware’s Container Service Extension and its integration with VMware Cloud Director, which allows service providers to create a Kubernetes-as-a-Service experience for their tenants utilizing their existing VCD-managed infrastructure.
Recently, my colleague at VMware, Daniel Paluszek, and I partnered with Nirmata to perform some testing on their new Kubernetes Extension for VMware Cloud Director. The Nirmata Kubernetes Extension for VCD builds on the rich UI experience already present in the VCD tenant portal by providing a workflow for provisioning Kubernetes clusters via CSE from the native UI.
The Native CSE Experience
As I’ve written about in my previous posts on CSE, once a service provider enables a tenant to provision Kubernetes clusters via CSE, tenants will use the vcd-cli with the CSE client extension enabled to provision and manage Kubernetes clusters. For example, a tenant would log in to their VCD Org through the vcd-cli and issue the following command to create a Kubernetes cluster via CSE:

$ vcd cse cluster create k8-cluster-1 --network outside --nodes 1

where k8-cluster-1 is the name of the cluster, --network is the OvDC network the cluster nodes will utilize, and --nodes 1 defines the number of worker nodes the cluster will contain.
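Once the cluster exists, the same CLI extension covers the rest of the lifecycle. A few illustrative commands follow; the cluster name is a placeholder, and exact subcommands can vary slightly between CSE releases, so check vcd cse --help for your version:

$ vcd cse cluster list
$ vcd cse cluster info k8-cluster-1
$ vcd cse cluster config k8-cluster-1 > ~/.kube/config
$ vcd cse cluster delete k8-cluster-1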
While many users are familiar enough with a CLI to adapt to this method of resource provisioning, one piece of feedback we get from our partner community is that they’d like to offer a native UI experience in the tenant portal to allow their end customers to more intuitively provision Kubernetes clusters via VCD. That’s where the Nirmata Kubernetes Extension for VCD comes in…
Continue reading “Exploring the Nirmata Kubernetes Extension for VMware Cloud Director”
Building on Part 1 of my series on installing VMware’s Container Service Extension 2.5.0, in this post I’ll walk through the process of configuring a client server to interact with CSE via the vcd-cli tool. I’ll also walk through the process of onboarding a tenant, as well as the workflow, from the tenant’s perspective, of provisioning and managing a Kubernetes cluster.
Configuring a CSE Client
Now that I’ve deployed my CSE server, I’ll need to utilize the vcd-cli tool with the CSE client extension enabled in order to interact with the CSE service. For the client server, I am, again, utilizing a CentOS 7.6 server and a Python 3.7.3 virtual environment to install and utilize the vcd-cli tool in this walkthrough.

The first thing I’ll need to do is create and activate my virtual environment, which I will install in the ~/cse-client directory:
$ python3.7 -m virtualenv ~/cse-client
$ source ~/cse-client/bin/activate
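With the virtual environment active, the client tooling is a pip install away. The sketch below assumes the CSE 2.5 package on PyPI (which pulls in vcd-cli as a dependency) and the extension entry documented for the CSE client; treat the profiles.yaml snippet as an assumption and verify it against the CSE docs:

$ pip install container-service-extension
$ vcd version

Then enable the CSE client extension by adding the following lines to ~/.vcd-cli/profiles.yaml:

extensions:
- container_service_extension.client.cse

$ vcd cse version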
Continue reading “Container Service Extension 2.5 Installation: Part 2”
With the recent release of the Container Service Extension 2.5.0, I wanted to take some time to walk through the installation and configuration of the Container Service Extension (CSE) server in conjunction with VMware vCloud Director 10.
This will be a series of three blog posts that cover the following topics:
- Part 1: Installing and configuring the CSE 2.5 server alongside vCloud Director 10
- Part 2: Configuring a CSE client, onboarding a tenant, and provisioning and managing Kubernetes clusters as a tenant
- Part 3: Configuring the CSE server to support CSE Enterprise (VMware Enterprise PKS) cluster creation
Container Service Extension Overview
Before we get started, I wanted to talk a bit about CSE and what purpose it serves in a Service Provider’s environment. The Container Service Extension is a VMware vCloud Director extension that helps tenants create, lifecycle manage, and interact with Kubernetes clusters in vCloud Director-managed environments.
There are currently two versions of CSE: Standard and Enterprise. CSE Standard brings Kubernetes-as-a-Service to vCD by creating customized vApp templates and enabling tenant/organization administrators to deploy fully functional Kubernetes clusters in self-contained vApps. CSE Standard cluster creation can be enabled on existing NSX-V backed OrgVDCs in a tenant’s environment. With CSE Enterprise, introduced in the CSE 2.0 release, VMware has also added the ability for tenants to provision VMware Enterprise PKS Kubernetes clusters backed by NSX-T resources in vCloud Director managed environments. In this blog post, I am going to focus on the enablement of CSE Standard Kubernetes cluster creation in an existing vCloud Director OvDC.
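At a high level, enabling any of this means standing up the CSE server and pointing it at vCloud Director through a config file. A rough sketch of the server-side commands follows; the config file name is a placeholder, and the subcommand names (particularly cse sample) should be verified against the CSE 2.5 documentation:

$ cse sample > config.yaml
$ cse install -c config.yaml
$ cse run -c config.yaml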
For more information on CSE, have a look at the Kubernetes-as-a-Service in vCloud Director reference architecture (authored by yours truly 😄) as well as the CSE Installation Documentation.
Continue reading “Container Service Extension 2.5 Installation: Part 1”
In this post, I’m going to walk through the process of installing and using Velero v1.1 to back up a Kubernetes application that includes persistent data stored in persistentvolumes. I will then simulate a DR scenario by completely deleting the application and using Velero to restore the application to the cluster, including the persistent data.
Meet Velero!! ⛵
Velero is a backup and recovery solution built specifically to assist in the backup (and migration) of Kubernetes applications, including their persistent storage volumes. You can even use Velero to back up an entire Kubernetes cluster for restore and/or migration! Velero addresses various use cases, including but not limited to:
- Taking backups of your cluster to allow for restore in case of infrastructure loss/corruption
- Migration of cluster resources to other clusters
- Replication of production cluster/applications to dev and test clusters
Velero essentially consists of two components:
- A server that runs as a set of resources within your Kubernetes cluster
- A command-line client that runs locally
Velero also supports backup and restore of Kubernetes volumes using restic, an open source backup tool. Velero needs an S3 API-compatible storage server to store these volume backups. To satisfy this requirement, I will also deploy a Minio server in my Kubernetes cluster so Velero is able to store my Kubernetes volume backups. Minio is a lightweight, easy-to-deploy S3 object store that you can run on premises. In a production environment, you’d want to deploy your S3-compatible storage solution in another cluster or environment to protect against total data loss in case of infrastructure failure.
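For reference, the broad strokes of that setup look like the following. The bucket name, credentials file, and Minio service URL are placeholders, and the flags reflect the Velero v1.1 S3/AWS provider syntax, so double-check them against the Velero docs for your version:

$ velero install \
    --provider aws \
    --bucket velero \
    --secret-file ./credentials-velero \
    --use-restic \
    --use-volume-snapshots=false \
    --backup-location-config region=minio,s3ForcePathStyle="true",s3Url=http://minio.velero.svc:9000

$ velero backup create my-app-backup --include-namespaces my-app
$ velero restore create --from-backup my-app-backup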
Continue reading “Backing Up Your Kubernetes Applications with Velero v1.1”
In my last post, I walked through the process of deploying Kubeapps in an Enterprise PKS Kubernetes cluster. In this post, I wanted to examine the workflow required for utilizing Harbor, an open source cloud native registry, as an option to serve out a curated set of Helm charts to developers in an organization. We’ll walk through a couple of scenarios, including configuring a “private” project in Harbor that houses Helm charts and container images for a specific group of developers. Building on my last post, we’ll also add this new Helm chart repository into our Kubeapps deployment to allow our developers to deploy our curated applications directly from the Kubeapps dashboard.
Harbor is an open source, trusted, cloud native registry project that stores, signs, and scans content. Harbor extends the open source Docker Distribution by adding functionality users often require, such as security, identity, and management. Having a registry closer to the build and run environment can improve image transfer efficiency. Harbor supports replication of images between registries, and also offers advanced security features such as user management, access control, and activity auditing. Enterprise support for Harbor Container Registry is included with VMware Enterprise PKS.
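To give a flavor of the workflow, serving charts out of Harbor boils down to treating a Harbor project as a Helm chart repository (exposed under /chartrepo/<project>). A rough sketch using the helm-push plugin; the Harbor hostname, project, chart, and credentials are placeholders:

$ helm repo add dev-charts https://harbor.mycompany.com/chartrepo/dev-project --username <user> --password <password>
$ helm repo update
$ helm push my-app-0.1.0.tgz dev-charts
$ helm search dev-charts/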
Continue reading “Using Harbor and Kubeapps to Serve Custom Helm Charts”
In this post, I’d like to take some time to walk through the process of deploying Kubeapps in an Enterprise PKS Kubernetes cluster. I’ll also walk through the process of utilizing the built-in ingress controller provided by NSX-T to expose the Kubeapps dashboard via a fully qualified domain name.
What is Kubeapps?
There’s been a lot of excitement in the Cloud Native space at VMware since the acquisition of Bitnami last year. The Bitnami team has done a lot of amazing work over the years to simplify the process of application deployment across all types of infrastructure, both in public and private clouds. Today we are going to take a look at Kubeapps. Kubeapps, an open source project developed by the folks at Bitnami, is a web-based UI for deploying and managing applications in Kubernetes clusters. Kubeapps allows users to:
- Browse and deploy Helm charts from chart repositories
- Inspect, upgrade and delete Helm-based applications installed in the cluster
- Add custom and private chart repositories (supports ChartMuseum and JFrog Artifactory)
- Browse and provision external services from the Service Catalog and available Service Brokers
- Connect Helm-based applications to external services with Service Catalog Bindings
- Secure authentication and authorization based on Kubernetes Role-Based Access Control
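As a rough sketch of the deployment itself, Kubeapps ships as a Helm chart in the Bitnami repository. The namespace, release name, and hostname below are placeholders, the command uses Helm 3 syntax (Helm 2 needs --name), and the ingress value keys can differ between chart versions, so verify them against the chart’s values:

$ helm repo add bitnami https://charts.bitnami.com/bitnami
$ kubectl create namespace kubeapps
$ helm install kubeapps bitnami/kubeapps --namespace kubeapps \
    --set ingress.enabled=true \
    --set ingress.hostname=kubeapps.mycompany.com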
Continue reading “Deploying Kubeapps and Exposing the Dashboard via Ingress Controller in Enterprise PKS”
As I’ve mentioned in recent posts, VMware’s Container Service Extension 2.0 (CSE) has recently been released. The big news around the 2.0 release is the ability to provision Enterprise PKS clusters via CSE.
It’s important to note that CSE 2.0 has a dependency on Python 3.7.3 or later. I had some trouble managing different versions of Python 3 on the CentOS host I used to support the CSE server component, so I wanted to document my steps for creating a virtual environment via virtualenv utilizing Python 3.7.3 and installing CSE Server 2.0 within that virtual environment.
virtualenv is a tool to create isolated Python environments. virtualenv creates a folder which contains all the necessary executables to use the packages that a Python project would need. This is useful in my situation because I had various versions of Python 3 installed on my CentOS server and I wanted to ensure Python 3.7.3 was used exclusively for the CSE installation without affecting other services on the server that rely on Python 3.
Installing Python 3.7.3 on CentOS
The first thing we need to do is install (and compile) Python 3.7.3 on our CentOS server.
We’ll need some development packages and the GCC compiler installed on the server:
# yum install -y zlib-devel gcc openssl-devel bzip2-devel libffi-devel
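From there, the remaining steps follow the standard source build, roughly sketched below. The download URL and make altinstall (which avoids clobbering the system python3) are the usual approach, and the virtual environment path is a placeholder:

# cd /usr/src
# curl -O https://www.python.org/ftp/python/3.7.3/Python-3.7.3.tgz
# tar xzf Python-3.7.3.tgz
# cd Python-3.7.3
# ./configure --enable-optimizations
# make altinstall

$ python3.7 -m pip install --user virtualenv
$ python3.7 -m virtualenv ~/cse-env
$ source ~/cse-env/bin/activate
$ pip install container-service-extension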
Continue reading “Creating a virtualenv with Python 3.7.3”