Deploying Kubeapps and Exposing the Dashboard via Ingress Controller in Enterprise PKS

In this post, I’ll walk through deploying Kubeapps in an Enterprise PKS Kubernetes cluster and then use the built-in ingress controller provided by NSX-T to expose the Kubeapps dashboard via a fully qualified domain name.

What is Kubeapps?

There’s been a lot of excitement in the Cloud Native space at VMware since the acquisition of Bitnami last year. The Bitnami team has done a lot of amazing work over the years to simplify the process of application deployment across all types of infrastructure, both in public and private clouds. Today we are going to take a look at Kubeapps. Kubeapps, an open source project developed by the folks at Bitnami, is a web-based UI for deploying and managing applications in Kubernetes clusters. Kubeapps allows users to:

  • Browse and deploy Helm charts from chart repositories
  • Inspect, upgrade and delete Helm-based applications installed in the cluster
  • Add custom and private chart repositories (supports ChartMuseum and JFrog Artifactory)
  • Browse and provision external services from the Service Catalog and available Service Brokers
  • Connect Helm-based applications to external services with Service Catalog Bindings
  • Secure authentication and authorization based on Kubernetes Role-Based Access Control

Assumptions/Pre-reqs

Before we get started, I wanted to lay out some assumptions and pre-reqs about the environment supporting this Kubeapps deployment. First, some info about the infrastructure backing my Kubernetes cluster:

  • vSphere 6.7u2
  • NSX-T 2.4
  • Enterprise PKS 1.4.1
  • vSphere Cloud Provider configured for persistent storage
  • A wildcard DNS entry to support your app ingress strategy

I’m also assuming that you have Helm installed on your Kubernetes cluster. Helm is a package manager for Kubernetes. Helm uses a packaging format called charts; a chart is a collection of files that describe a related set of Kubernetes resources. A single chart might be used to deploy something simple, like a memcached pod, or something complex, like a full web app stack with HTTP servers, databases, caches, and so on. Kubeapps uses Helm charts to deploy application stacks to Kubernetes clusters, so Helm must be deployed in the cluster prior to deploying Kubeapps. In this tutorial, we’re actually going to deploy Kubeapps via its Helm chart as well!
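
If you don’t have Helm in the cluster yet, a minimal Helm 2-style setup (the commands later in this post use Helm 2 syntax) looks something like the sketch below. The tiller service account name is just the usual convention, and giving Tiller cluster-admin is fine for a lab but not something you’d do in production:

# Create a service account for Tiller and grant it cluster-admin (lab use only)
$ kubectl create serviceaccount tiller -n kube-system
$ kubectl create clusterrolebinding tiller \
  --clusterrole=cluster-admin \
  --serviceaccount=kube-system:tiller

# Install Tiller into the cluster and verify the client and server versions respond
$ helm init --service-account tiller
$ helm version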

Finally, in order for Kubeapps to be able to deploy applications into the cluster, we will need to create a couple of Kubernetes RBAC resources. First, we’ll create a serviceaccount (called kubeapps-operator) and bind it to a clusterrole via a clusterrolebinding so it can deploy apps in the cluster. For the sake of simplicity, we are going to assign this service account cluster-admin privileges, which means the kubeapps-operator service account has the highest level of access to the Kubernetes cluster. This is NOT recommended in production environments. I’ll be publishing a follow-up post on best practices for deploying Helm and Kubeapps in a production environment soon. Stay tuned!

Preparing the Cluster for a Kubeapps Deployment

The first thing we’ll want to do is add the Bitnami repo to our Helm configuration, since the Bitnami repo houses the Kubeapps Helm chart:

$ helm repo add bitnami https://charts.bitnami.com/bitnami
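
If you want to confirm the chart is visible to your Helm client before going any further, a quick repo update and search should list it (chart versions will differ in your environment):

$ helm repo update
$ helm search kubeapps   # should return bitnami/kubeapps and its current chart version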

Now that we’ve added the repo, let’s create a namespace for our Kubeapps deployment to live in:

$ kubectl create ns kubeapps

Now we’re ready to create our serviceaccount and attach our clusterrole to it:

$ kubectl create serviceaccount kubeapps-operator 
$ kubectl create clusterrolebinding kubeapps-operator \
--clusterrole=cluster-admin \
--serviceaccount=default:kubeapps-operator
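
Note that the serviceaccount above lives in the default namespace, which is why the clusterrolebinding references default:kubeapps-operator. A quick sanity check that the binding took effect might look like this:

$ kubectl auth can-i create deployments \
  --as=system:serviceaccount:default:kubeapps-operator
yes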

Let’s use Helm to deploy our Kubeapps application!!

$ helm install --name kubeapps --namespace kubeapps bitnami/kubeapps \
--set mongodb.securityContext.enabled=false \
--set mongodb.mongodbEnableIPv6=false

Note: we could opt to set frontend.service.type=LoadBalancer if we wanted to utilize the Enterprise PKS/NSX-T integration to expose the dashboard via a dedicated IP, but since we’re going to use an ingress controller (also provided by NSX-T), we’ll leave that option out.
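
For reference, that LoadBalancer variant would look roughly like the following; we won’t use it in this walkthrough:

$ helm install --name kubeapps --namespace kubeapps bitnami/kubeapps \
  --set mongodb.securityContext.enabled=false \
  --set mongodb.mongodbEnableIPv6=false \
  --set frontend.service.type=LoadBalancer   # NSX-T assigns the frontend service a dedicated VIP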

After a minute or two, we can check what was deployed via the Kubeapps Helm chart and ensure all the pods are available:

$ kubectl get all -n kubeapps
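
If you’d rather wait on the rollout than keep re-running the command above, something like this works too (the five-minute timeout is arbitrary):

$ helm status kubeapps                    # summarizes the release and its resources
$ kubectl wait pods --all -n kubeapps \
  --for=condition=Ready --timeout=300s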

Exposing the Kubeapps Dashboard via FQDN

Our pods and services are now available, but we haven’t exposed the dashboard for access from outside of the cluster yet. For that, we need to create an ingress resource. If you review the output of the command above, the kubeapps service, of type ClusterIP, is serving out our dashboard on port 80. The Kubernetes service type ClusterIP only exposes our service internally within the cluster, so we’ll need to create an ingress resource that targets this service on port 80 to expose the dashboard to external users.

The Enterprise PKS and VMware NSX-T integration provides an ingress controller for each Kubernetes cluster provisioned. This ingress controller is actually an L7 load balancer in NSX-T primitives. Any time we create an ingress resource in our Enterprise PKS Kubernetes cluster, NSX-T automatically creates an entry in the L7 load balancer to redirect traffic, based on hostname, to the correct services/pods in the cluster.

As mentioned in the Assumptions/Pre-reqs section, I’ve got a wildcard DNS entry that redirects *.prod.example.com to the IP address of the NSX-T L7 load balancer. This allows my developers to use the native Kubernetes ingress resources to define the hostnames of their applications without having to work with me or my infrastructure team to manually update DNS records every time they want to expose an application to the public.
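
If you want to double-check the wildcard record before moving on, a couple of quick lookups should both come back with the load balancer’s address (10.96.59.106 in my lab, as we’ll see in the ingress output below):

$ dig +short kubeapps.prod.example.com
10.96.59.106
$ dig +short some-other-app.prod.example.com   # any hostname under the wildcard resolves the same way
10.96.59.106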

Enough talk, let’s create our ingress resource! I’ve used the .yaml file below to expose my Kubeapps dashboard at kubeapps.prod.example.com:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: kubeapps-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - host: kubeapps.prod.example.com
    http:
      paths:
      - path: /*
        backend:
          serviceName: kubeapps 
          servicePort: 80

As we can see, we are telling the ingress resource to target the kubeapps service on port 80 to “proxy” the dashboard to the public. Now let’s create that ingress resource:

$ kubectl create -f kubeapps-ingress.yaml -n kubeapps

And review the ingress resource to get our hostname and confirm the IP address of the NSX-T L7 load balancer:

$ kubectl get ing -n kubeapps
NAME               HOSTS                       ADDRESS                     PORTS   AGE
kubeapps-ingress   kubeapps.prod.example.com   10.96.59.106,100.64.32.27   80      96m

Note: the 10.96.59.106 address is the IP of the NSX-T load balancer, which is where my DNS wildcard directs requests, and the HOSTS entry is the hostname our Kubeapps dashboard should be accessible on. So let’s check it out!
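
A quick curl from outside the cluster is an easy way to confirm the L7 load balancer is routing the hostname to the Kubeapps frontend before opening it in a browser:

$ curl -I http://kubeapps.prod.example.com/
# expect an HTTP 200 (or a redirect, depending on the Kubeapps version) from the frontend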

Now we’re ready to deploy applications in our Kubernetes cluster with the click of a button!!

Behind the Scenes with NSX-T

So let’s have a look at what’s actually happening in NSX-T and how we can cross-reference it with our Kubernetes resources. As I mentioned earlier, any time an Enterprise PKS cluster is provisioned, two NSX-T load balancers are created automatically:

  • An L4 load balancer that fronts the Kubernetes master(s) to expose the Kubernetes API to external users
  • An L7 load balancer that acts as the ingress controller for the cluster

So, now that we’ve created an ingress resource for our Kubeapps dashboard, let’s look at what’s happening in the NSX-T manager.

Let’s log in to the NSX-T manager with our admin credentials and head to the Advanced Networking and Security tab, then navigate to Load Balancing and choose the Server Pools tab on the right side of the UI. I’ve queried the PKS API to get the UUID for my cluster (1cd1818c...), which corresponds with the load balancer we want to inspect (note: you’ll see two LB entries for that UUID, one for the Kubernetes API, the other for the ingress controller):
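
As an aside, if you’re following along, the cluster UUID is also easy to grab from the PKS CLI rather than calling the API directly (the cluster name below is just a placeholder):

$ pks clusters                  # lists all clusters along with their UUIDs
$ pks cluster my-prod-cluster   # detailed view of a single cluster, including its UUID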

Select the Load Balancer in question and then select the Pool Members option on the right side of the UI:

This shows us two Kubernetes pods and their internal IP addresses. Let’s go back to the CLI and compare this with what we see in the cluster:

$ kubectl get pods -l app=kubeapps -o wide -n kubeapps
NAME                        READY   STATUS    RESTARTS   AGE    IP            NODE                                   
kubeapps-7cd9986dfd-7ghff   1/1     Running   0          124m   172.16.17.6   0faf789a-18db-4b3f-a91a-a9e0b213f310
kubeapps-7cd9986dfd-mwk6j   1/1     Running   0          124m   172.16.17.7   8aa79ec7-b484-4451-aea8-cb5cf2020ab0

This confirms that the two pods serving our Kubeapps dashboard are fronted by the L7 load balancer in NSX-T.

Conclusion

I know that was a lot to take in, but I wanted to make sure to review the actions we performed in this post:

  • Created a serviceaccount and clusterrolebinding to allow Kubeapps to deploy apps
  • Deployed our Kubeapps application via a Helm Chart
  • Exposed the Kubeapps dashboard for external access via our NSX-T “ingress controller”
  • Verified that Enterprise PKS and NSX-T worked together to automate the creation of all of these network resources to support our applications

As I mentioned above, stay tuned for a follow-up post that will detail the security implications of deploying Helm and Kubeapps in production environments. Thanks for reading!!!
