Implementing RBAC with VMware’s Container Service Extension 2.0 for vCloud Director

In case you haven’t heard, VMware recently announced the general availability of the Container Service Extension 2.0 release for vCloud Director. The biggest addition of functionality in the 2.0 release is the ability to use CSE to deploy Enterprise PKS clusters via the vcd-cli tool in addition to native, upstream Kubernetes clusters. I’ll be adding a blog post shortly on the process required for enabling your vCD environment to support Enterprise PKS deployments via the Container Service Extension.

Today, we are going to talk about utilizing the RBAC functionality introduced in CSE 1.2.6 to assign different permissions to our tenants to allow them to deploy Enterprise PKS (CSE Enterprise) clusters and/or native Kubernetes clusters (CSE Native). The cloud admin will be responsible for enabling and configuring the CSE service and enabling tenant admin/users to deploy CSE Enterprise or CSE Native clusters in their virtual datacenter(s).

Prerequisites

  • The CSE 2.0 server is installed and configured to serve up native Kubernetes clusters AND Enterprise PKS clusters. Please refer to the CSE documentation for more information on this process.
  • You must have at least two organizations present and configured in vCD. In this example, I’ll be utilizing the following orgs:
    • cse-native-org (native k8s provider)
    • cse-ent-org (Enterprise PKS k8s provider)
  • This example also assumes none of the organizations have been enabled for k8s providers up to this point. We will be starting from scratch!

Before We Begin

As noted above, this example assumes CSE 2.0 is already installed in our environment, but I wanted to take some time to call out the process for enabling RBAC in CSE. When installing CSE, all we need to do to enable RBAC is ensure the enforce_authorization flag is set to true in the service section of the config.yaml file:

…output omitted…

service:
  enforce_authorization: true
  listeners: 5

…output omitted…

Please note, if we set this flag to false, any user with the ability to create compute resources via vCD will also be able to provision k8s clusters.
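Because a typo here would silently open cluster provisioning up to every compute-capable user, it’s worth sanity-checking the flag before (re)starting the CSE server. A minimal sketch; the config-snippet.yaml file written here is purely illustrative, so point the grep at your real config.yaml instead:

```shell
# Write out a sample "service" section matching the snippet above,
# purely so this check has something to run against.
cat > config-snippet.yaml <<'EOF'
service:
  enforce_authorization: true
  listeners: 5
EOF

# Confirm RBAC enforcement is on before (re)starting the CSE server.
grep -q 'enforce_authorization: true' config-snippet.yaml && echo "RBAC enforcement enabled"
```

After confirming the flag, restart the CSE server so the new setting takes effect.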

Enabling the “cse-native-org” Organization

The first thing we’ll need to do is grant access to the “cse-native-org” to perform CSE Native operations. We’ll first need to login to the vCD instance using the vcd-cli command with a system admin user, then we can add the right to the org.

$ vcd login vcd.example.com System administrator -iw
Password:
administrator logged in, org: 'System', vdc: ''

Now we can grant the org “cse-native-org” the right to deploy native k8s clusters:

$ vcd right add -o 'cse-native-org' "{cse}:CSE NATIVE DEPLOY RIGHT"

At this point, we have granted the org the ability to provision clusters, but is that enough? What happens when we log in and attempt to provision a cluster with a user who belongs to that tenant? We’ll run the cluster create command, where test-cluster is the name we assign to our cluster and --nodes is the number of worker nodes we’d like to deploy:

$ vcd login vcd.example.com cse-native-org cse-native-admin -iw
Password:
cse-native-admin logged in, org: 'cse-native-org', vdc: 'native-ovdc'

$ vcd cse cluster create test-cluster --network intranet --nodes 1
Usage: vcd cse cluster create [OPTIONS] NAME
Try "vcd cse cluster create -h" for help.

Error: Access Forbidden. Missing required rights.

Here we see the RBAC feature in action! Because we haven’t added the "{cse}:CSE NATIVE DEPLOY RIGHT" right to the role associated with the user, they aren’t allowed to provision k8s clusters. NOTE: If RBAC is not enabled, any user in the org will be able to use CSE to deploy clusters of whichever cluster type their org is enabled for.

So let’s log back in as the administrator and give our tenant admin user the ability to provision k8s clusters. We have created a role in vCD that mimics the “Organization Administrator” permission set and named it cse-admin. The cse-native-admin user will be created with the cse-admin role.

$ vcd login vcd.example.com System administrator -iw
Password:
administrator logged in, org: 'System', vdc: ''

$ vcd user create 'cse-native-admin' 'password' 'cse-admin'

$ vcd role add-right 'cse-admin' "{cse}:CSE NATIVE DEPLOY RIGHT"

Finally, we need to enable the tenant to support native k8 cluster deployments:

$ vcd cse ovdc enable native-ovdc -o cse-native-org -k native
metadataUpdate: Updating metadata for Virtual Datacenter native-ovdc(dd7d117e-6034-467b-b696-de1b943e8664)
task: 3a6bf21b-93e9-44c9-af6d-635020957b21, Updated metadata for Virtual Datacenter native-ovdc(dd7d117e-6034-467b-b696-de1b943e8664), result: success

Now that we have given our user the right to create clusters, let’s give the cluster create command another try:

$ vcd login vcd.example.com cse-native-org cse-native-admin -iw
Password:
cse-native-admin logged in, org: 'cse-native-org', vdc: 'native-ovdc'

$ vcd cse cluster create test-cluster --network intranet --nodes 1
create_cluster: Creating cluster test-cluster(7f509a1c-4743-407d-95d3-355883191313)
create_cluster: Creating cluster vApp test-cluster(7f509a1c-4743-407d-95d3-355883191313)
create_cluster: Creating master node for test-cluster(7f509a1c-4743-407d-95d3-355883191313)
create_cluster: Initializing cluster test-cluster(7f509a1c-4743-407d-95d3-355883191313)
create_cluster: Creating 1 node(s) for test-cluster(7f509a1c-4743-407d-95d3-355883191313)
create_cluster: Adding 1 node(s) to test-cluster(7f509a1c-4743-407d-95d3-355883191313)
task: 3de7f52f-e018-4332-9731-a5fc99bde8f8, Created cluster test-cluster(7f509a1c-4743-407d-95d3-355883191313), result: success

Success!! Our user was able to provision their cluster! Now we can get some information about the provisioned k8s cluster and grab our k8s cluster config so we can access our new cluster with kubectl:

$ vcd cse cluster info test-cluster
property         value
---------------  -------------------------------------------------------------------------------
cluster_id       7f509a1c-4743-407d-95d3-355883191313
cse_version      2.0.0
leader_endpoint  10.10.10.210
master_nodes     {'name': 'mstr-4su4', 'ipAddress': '10.10.10.210'}
name             test-cluster
nfs_nodes
nodes            {'name': 'node-utcz', 'ipAddress': '10.10.10.211'}
number_of_vms    2
status           POWERED_ON
template         photon-v2
vapp_href        https://vcd.example.com/api/vApp/vapp-065141f8-4c5b-47b5-abee-c89cb504773b
vapp_id          065141f8-4c5b-47b5-abee-c89cb504773b
vdc_href         https://vcd.example.com/api/vdc/f703babd-8d95-4e37-bbc2-864261f67d51
vdc_name         native-ovdc

$ vcd cse cluster config test-cluster > ~/.kube/config

$ kubectl get nodes
NAME        STATUS   ROLES    AGE   VERSION
mstr-4su4   Ready    master   1d    v1.10.11
node-utcz   Ready    <none>   1d    v1.10.11

Now we’re ready to provision our first Kubernetes app!
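As a first sanity check of the new cluster, we can write a minimal Deployment manifest and apply it with the kubeconfig we just saved. This is only a sketch; the hello-nginx name and the nginx image are arbitrary illustrative choices:

```shell
# Write a minimal nginx Deployment manifest.
cat > hello-nginx.yaml <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: hello-nginx
  template:
    metadata:
      labels:
        app: hello-nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
EOF

# Apply it against the cluster and watch the pod come up:
#   kubectl apply -f hello-nginx.yaml
#   kubectl get pods -l app=hello-nginx
```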

Enabling the “cse-ent-org” Organization

Now, our cloud admin has received a request from another tenant (cse-ent-org) that they would like users in their org to be able to provision Enterprise PKS clusters. Our cloud admin will follow the same workflow documented in the previous example but substitute the “CSE Enterprise” rights for the “CSE Native” rights.

Let’s take a look at what happens if a user in the cse-ent-org tries to login and provision a cluster before our cloud admin has enabled the right to do so:

$ vcd login vcd.example.com cse-ent-org cse-ent-user -iw
Password:
cse-ent-user logged in, org: 'cse-ent-org', vdc: 'pks-ovdc'

$ vcd cse cluster create test-cluster
Usage: vcd cse cluster create [OPTIONS] NAME
Try "vcd cse cluster create -h" for help.

Error: Org VDC is not enabled for Kubernetes cluster deployment

As expected, this errors out because our cloud admin has not yet enabled the org VDC for Kubernetes cluster deployments. Now, our cloud admin will log in and grant the right to deploy Enterprise PKS clusters via CSE in the cse-ent-org tenant:

$ vcd login vcd.example.com System administrator -iw
Password:
administrator logged in, org: 'System', vdc: ''

$ vcd right add "{cse}:PKS DEPLOY RIGHT" -o cse-ent-org
Rights added to the Org 'cse-ent-org'

Just as in the previous example, we need to create a user and a role that will allow our user to provision k8s clusters in this org. We have created a custom role in this example that mimics the “vApp Author” permissions and named it pks-k8-role. The role will be assigned to the user that needs to create k8s clusters. Then, we need to give that role the right to deploy Enterprise PKS clusters:

$ vcd user create 'cse-ent-user' 'password' 'pks-k8-role'

$ vcd role add-right "pks-k8-role" "{cse}:PKS DEPLOY RIGHT"

Now that the cloud admin has granted the user in the tenant org the required rights, we need to enable the org VDC to allow deployment of Enterprise PKS clusters:

$ vcd cse ovdc enable pks-ovdc -o cse-ent-org -k ent-pks -p "small" -d "test.local" 

metadataUpdate: Updating metadata for Virtual Datacenter pks-ovdc(edu4617e-6034-467b-b696-de1b943e8664) 
task: 3a6bf21b-93e9-44c9-af6d-635020957b21, Updated metadata for Virtual Datacenter pks-ovdc(edu4617e-6034-467b-b696-de1b943e8664), result: success

Note: When enabling an org for Enterprise PKS, we need to define the plan and the domain to be assigned to the instances (and load-balancer) that PKS will provision. Currently, you can only enable one plan per org, but you can run the above command again with a different plan if you’d like to switch in the future.
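Switching plans later is just a matter of re-running the enable command with the new plan name. A sketch, assuming a plan named "medium" (a hypothetical plan name) exists in your Enterprise PKS deployment:

```shell
# Re-enable the OVDC with a different PKS plan; subsequently created
# clusters will be provisioned using the new plan.
vcd cse ovdc enable pks-ovdc -o cse-ent-org -k ent-pks -p "medium" -d "test.local"
```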

It’s also worth mentioning that you can create separate OVDCs within the same org and enable one OVDC for native k8s and the other for Enterprise PKS if users in the same tenant org have different requirements.
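As a sketch of that split, assuming a tenant org my-org with two OVDCs named ovdc-native and ovdc-pks (all hypothetical names for this illustration):

```shell
# Enable one OVDC in the org for native Kubernetes deployments...
vcd cse ovdc enable ovdc-native -o my-org -k native

# ...and a second OVDC in the same org for Enterprise PKS deployments.
vcd cse ovdc enable ovdc-pks -o my-org -k ent-pks -p "small" -d "test.local"
```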

Finally, we are ready to provision our PKS cluster. We’ll login as our cse-ent-user and deploy our cluster:

$ vcd login vcd.example.com cse-ent-org cse-ent-user -iw 
Password: 
cse-ent-user logged in, org: 'cse-ent-org', vdc: 'pks-ovdc'

$ vcd cse cluster create test-cluster-pks
property                     value
---------------------------  --------------------------------------------------------
compute_profile_name         cp--41e132c6-4480-48b1-a075-31f39b968a50--cse-ent-ovdc-1
kubernetes_master_host       test-cluster-pks.test.local
kubernetes_master_ips        In Progress
kubernetes_master_port       8443
kubernetes_worker_instances  2
last_action                  CREATE
last_action_description      Creating cluster
last_action_state            in progress
name                         test-cluster-pks
pks_cluster_name             test-cluster-pks---5d33175a-3010-425b-aabe-bddbbb689b7e
worker_haproxy_ip_addresses

We can continue to monitor the status of our cluster creation with the cluster list or cluster info commands:

$ vcd cse cluster list 

k8s_provider    name              status            vdc
--------------  ----------------  ----------------  --------------
ent-pks         test-cluster-pks  create succeeded  pks-ovdc

Now that we have verified our cluster has been created successfully, we need to obtain the config file so we can access the cluster with kubectl:

$ vcd cse cluster config test-cluster-pks > ~/.kube/config

Now our user is ready to deploy apps on their PKS cluster!

As a final test, let’s see what happens when a user in the same org that we just enabled for Enterprise PKS (cse-ent-org) tries to provision a cluster. This user (vapp-user) has been assigned the “vApp Author” role as it exists “out of the box.”

$ vcd login vcd.example.com cse-ent-org vapp-user -iw 
Password: 
vapp-user logged in, org: 'cse-ent-org', vdc: 'pks-ovdc'

$ vcd cse cluster create test-cluster-pks-2 
Usage: vcd cse cluster create [OPTIONS] NAME 
Try "vcd cse cluster create -h" for help.

Error: Access Forbidden. Missing required rights. 

There we have it, RBAC in full effect!! The user cannot provision a cluster, even though the org is enabled for Enterprise PKS cluster creation, because their assigned role does not have the rights to do so.

Conclusion

This was a quick overview of the capabilities provided by the Role-Based Access Control functionality present in VMware’s Container Service Extension 2.0 for vCloud Director. We were able to allow users in orgs to provision k8s clusters of both the native and Enterprise PKS variants. We also showcased how we can prevent “unprivileged” users in the same org from provisioning k8s clusters. Hope you found it useful!!
