Creating a PvDC for Enterprise PKS in vCloud Director

If you read my recent blog post on RBAC in the new release of VMware’s Container Service Extension for vCloud Director, you may have noticed that I mentioned a follow-up post on the steps required to add an Enterprise PKS-controlled vCenter Server to vCloud Director. I wanted to take a little bit of time to go through that process as it’s a relatively new workflow.

First of all, in our lab deployment, we are using an NSX-T backed vSphere environment to provide networking functionality to the Enterprise PKS deployment. As you may know, NSX-T integration is fairly new in the vCloud Director world (and growing every day!). With this in mind, the process of adding the vSphere/NSX-T components into vCD is a little bit different. Let’s have a look at the workflow for creating a Provider Virtual Datacenter (PvDC) that will support our tenant using CSE to provision Enterprise PKS Kubernetes clusters.

Logging into the HTML5 vCloud Director Admin Portal

The first point to note is that we can only add a vSphere environment backed by NSX-T in the HTML5 admin portal in the current release of vCD (9.7 at the time of writing). Let’s navigate to https://vcd-director-url.com/provider and log in:

Adding vCenter Server

First, we need to add our vCenter Server (vCSA) that is managed by Enterprise PKS to our vCD environment. Open the menu at the top of the page, select the vSphere Resources option, and then select the Add option above your list of existing vCSAs:

Next, we fill out all of the information vCD requires to connect to our vCSA. Once that’s complete, select Next:

On the NSX-V Manager section, we want to ensure that we disable the Configure Settings option here as we will be utilizing a vSphere environment backed by NSX-T, as opposed to NSX-V. After disabling the NSX-V setting, select Next:

Finally, review the configuration information and select Finish to add the vCSA to your vCD deployment:

Add NSX-T Manager

Now that we’ve added our vCSA sans NSX-V Manager, we need to add our NSX-T Manager to our vCD deployment. Select the NSX-T Managers menu from the left side of the portal and then select the Add option to plug in our NSX-T Manager information:

Once we fill out the required information, we can select the Save button to finish the process:

Once we’ve verified the action was successful in the Task menu, we are ready to create our PvDC!

Creating a PvDC with our PKS vCSA and NSX-T Manager

Normally, we would be able to create PvDCs in the WebUI, but PvDCs that are backed by NSX-T can only be created via the API. We will use the vcd-cli to accomplish this. First, we need to log in to the vCD instance as a cloud admin user:

$ vcd login vcd.example.com System administrator -iw
Password:
administrator logged in, org: 'System', vdc: ''

Now, we use the following command to create our new PvDC where:

"PKS-PVDC" is the name of our new PvDC • "ent-cse-vcsa" is the name of our newly added vCSA • "pks-nsx-t-mgr" is the name of our newly added NSX-T manager • "*" is our storage profile • "pks-cluster" is our resource pool • "--enable" to ensure the PvDC is enabled upon creation

$ vcd pvdc create PKS-PVDC ent-cse-vcsa -t pks-nsx-t-mgr -s "*" -r pks-cluster --enable
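
If you prefer to double-check from the command line before heading back to the portal, vcd-cli also offers a pvdc list subcommand (a quick sanity check, assuming it’s available in your vcd-cli version) that should show the new PvDC:

$ vcd pvdc list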

Now, let’s navigate back to the portal to ensure the PvDC is present and enabled. Select the Cloud Resources option from the top menu and the Provider VDCs option from the left menu:

Create our Organization and Organization Virtual Datacenters

Now that we’ve built out our PvDC, we are ready to create our tenant org and a virtual datacenter for that tenant to utilize for their Enterprise PKS workloads.

First, navigate to the Organizations option on the left menu and select the Add option above the list of orgs:

Fill out the required information to create the org and select the Create button:

We now need to create an Organization Virtual Datacenter (OvDC) to support our org. Select the Organization VDC option from the left menu and select the New button:

I won’t walk through the options here as they’re well documented, but you will need to define your Organization, PvDC, Allocation Model, Allocation Pool, Storage Policies, and Network Pool so users in your tenant org have resources to use when provisioning.

At this point, we have done all the pre-work required and we’re ready to connect this OrgVDC to our Container Service Extension instance and start provisioning our Enterprise PKS clusters in vCD!!

Implementing RBAC with VMware’s Container Service Extension 2.0 for vCloud Director

In case you haven’t heard, VMware recently announced the general availability of the Container Service Extension 2.0 release for vCloud Director. The biggest addition of functionality in the 2.0 release is the ability to use CSE to deploy Enterprise PKS clusters via the vcd-cli tool in addition to native, upstream Kubernetes clusters. I’ll be adding a blog post shortly on the process required for enabling your vCD environment to support Enterprise PKS deployments via the Container Service Extension.

Today, we are going to talk about utilizing the RBAC functionality introduced in CSE 1.2.6 to assign different permissions to our tenants to allow them to deploy Enterprise PKS (CSE Enterprise) clusters and/or native Kubernetes clusters (CSE Native). The cloud admin will be responsible for enabling and configuring the CSE service and enabling tenant admin/users to deploy CSE Enterprise or CSE Native clusters in their virtual datacenter(s).

Prerequisites

  • The CSE 2.0 server is installed and configured to serve up native Kubernetes clusters AND Enterprise PKS clusters. Please refer to the CSE documentation for more information on this process.
  • Must have at least two organizations present and configured in vCD. In this example, I’ll be utilizing the following orgs:
    • cse-native-org (native k8 provider)
    • cse-ent-org (PKS Enterprise k8 provider)
  • This example also assumes none of the organizations have been enabled for k8 providers up to this point. We will be starting from scratch!

Before We Begin

As noted above, this example assumes we have CSE 2.0 installed already in our environment, but I wanted to take some time to call out the process for enabling RBAC in CSE. When installing CSE, all we need to do to enable RBAC is ensure the enforce_authorization flag is set to true in the service section of the config.yaml file:

…output omitted…

service:
  enforce_authorization: true
  listeners: 5

…output omitted…

Please note, if we set the flag to false, any user with the ability to create compute resources via vCD will also be able to provision k8 clusters.
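
One more note: if you toggle enforce_authorization on an existing installation, the change only takes effect after the CSE server is restarted with the updated config. A minimal sketch, assuming the CSE server runs in the foreground and the config file lives at config.yaml (check cse run --help for the exact flags in your version):

$ cse run -c config.yaml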

Enabling the “cse-native-org” Organization

The first thing we’ll need to do is grant access to the “cse-native-org” to perform CSE Native operations. We’ll first need to login to the vCD instance using the vcd-cli command with a system admin user, then we can add the right to the org.

$ vcd login vcd.example.com System administrator -iw
Password:
administrator logged in, org: 'System', vdc: ''

Now we can grant the org “cse-native-org” the right to deploy native k8 clusters:

$ vcd right add -o 'cse-native-org' "{cse}:CSE NATIVE DEPLOY RIGHT"

At this point, we have granted the tenant the ability to provision clusters, but is that enough? What happens when we log in and attempt to provision a cluster with a user who belongs to that tenant? We’ll run the cluster create command, where test-cluster is the name we assign to our cluster and nodes is the number of worker nodes we’d like to deploy:

$ vcd login vcd.example.com cse-native-org cse-native-admin -iw
Password:
cse-native-admin logged in, org: 'cse-native-org', vdc: 'native-ovdc'

$ vcd cse cluster create test-cluster --network intranet --nodes 1
Usage: vcd cse cluster create [OPTIONS] NAME
Try "vcd cse cluster create -h" for help.

Error: Access Forbidden. Missing required rights.

Here we see the RBAC feature in action! Because we haven’t added the {cse}:CSE NATIVE DEPLOY RIGHT right to the role associated with the user, they aren’t allowed to provision k8 clusters. NOTE: If RBAC is not enabled, any user in the org will be able to use CSE to deploy clusters for the cluster type their org is enabled for.

So let’s log back in as the administrator and give our tenant admin user the ability to provision k8 clusters. We have created a role in vCD for this user that mimics the “Organization Admin” permission set and named it cse-admin. The cse-native-admin user has been created with the cse-admin role.

$ vcd login vcd.example.com System administrator -iw
Password:
administrator logged in, org: 'System', vdc: ''

$ vcd user create 'cse-native-admin' 'password' 'cse-admin'

$ vcd role add-right 'cse-admin' "{cse}:CSE NATIVE DEPLOY RIGHT"

Finally, we need to enable the tenant to support native k8 cluster deployments:

$ vcd cse ovdc enable native-ovdc -o cse-native-org -k native
metadataUpdate: Updating metadata for Virtual Datacenter native-ovdc(dd7d117e-6034-467b-b696-de1b943e8664)
task: 3a6bf21b-93e9-44c9-af6d-635020957b21, Updated metadata for Virtual Datacenter native-ovdc(dd7d117e-6034-467b-b696-de1b943e8664), result: success

Now that we have given our user the right to create clusters, let’s give the cluster create command another try:

$ vcd login vcd.example.com cse-native-org cse-native-admin -iw
Password:
cse-native-admin logged in, org: 'cse-native-org', vdc: 'native-ovdc'

$ vcd cse cluster create test-cluster --network intranet --nodes 1
create_cluster: Creating cluster test-cluster(7f509a1c-4743-407d-95d3-355883191313)
create_cluster: Creating cluster vApp test-cluster(7f509a1c-4743-407d-95d3-355883191313)
create_cluster: Creating master node for test-cluster(7f509a1c-4743-407d-95d3-355883191313)
create_cluster: Initializing cluster test-cluster(7f509a1c-4743-407d-95d3-355883191313)
create_cluster: Creating 1 node(s) for test-cluster(7f509a1c-4743-407d-95d3-355883191313)
create_cluster: Adding 1 node(s) to test-cluster(7f509a1c-4743-407d-95d3-355883191313)
task: 3de7f52f-e018-4332-9731-a5fc99bde8f8, Created cluster test-cluster(7f509a1c-4743-407d-95d3-355883191313), result: success

Success!! Our user was now able to provision their cluster!! Now we can get some information about the provisioned k8 cluster and grab our k8 cluster config so we can access our new cluster with kubectl:

$ vcd cse cluster info test-cluster
property         value
---------------  -------------------------------------------------------------------------------
cluster_id       7f509a1c-4743-407d-95d3-355883191313
cse_version      2.0.0
leader_endpoint  10.10.10.210
master_nodes     {'name': 'mstr-4su4', 'ipAddress': '10.10.10.210'}
name             test-cluster
nfs_nodes
nodes            {'name': 'node-utcz', 'ipAddress': '10.10.10.211'}
number_of_vms    2
status           POWERED_ON
template         photon-v2
vapp_href        https://vcd.example.com/api/vApp/vapp-065141f8-4c5b-47b5-abee-c89cb504773b
vapp_id          065141f8-4c5b-47b5-abee-c89cb504773b
vdc_href         https://vcd.example.com/api/vdc/f703babd-8d95-4e37-bbc2-864261f67d51
vdc_name         native_ovdc

$ vcd cse cluster config test-cluster > ~/.kube/config

$ kubectl get nodes
NAME        STATUS   ROLES    AGE   VERSION
mstr-4su4   Ready    master   1d    v1.10.11
node-utcz   Ready    <none>   1d    v1.10.11

Now we’re ready to provision our first Kubernetes app!
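
For example, a quick smoke test might look something like this (a sketch, assuming the cluster has outbound access to pull public images; the nginx deployment name and image are just placeholders):

$ kubectl create deployment nginx --image=nginx
$ kubectl expose deployment nginx --port=80 --type=NodePort
$ kubectl get pods,svc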

Enabling the “cse-ent-org” Organization

Now, our cloud admin has received a request from another tenant (cse-ent-org) that they would like users in their org to be able to provision Enterprise PKS clusters. Our cloud admin will follow the same workflow documented in the previous example but substitute the “CSE Enterprise” rights for the “CSE Native” rights.

Let’s take a look at what happens if a user in the cse-ent-org tries to login and provision a cluster before our cloud admin has enabled the right to do so:

$ vcd login vcd.example.com cse-ent-org cse-ent-user -iw
Password:
cse-ent-user logged in, org: 'cse-ent-org', vdc: 'pks-ovdc'

$ vcd cse cluster create test-cluster
Usage: vcd cse cluster create [OPTIONS] NAME
Try "vcd cse cluster create -h" for help.

Error: Org VDC is not enabled for Kubernetes cluster deployment

As expected, this errors out because our cloud admin has not enabled the right to deploy k8 clusters in the org. Now, our cloud admin will login and enable the right to deploy Enterprise PKS clusters via CSE in the cse-ent-org tenant:

$ vcd login vcd.example.com System administrator -iw
Password:
administrator logged in, org: 'System', vdc: ''

$ vcd right add "{cse}:PKS DEPLOY RIGHT" -o cse-ent-org
Rights added to the Org 'cse-ent-org'

Just as in the previous example, we need to create a user and a role that will allow our user to provision k8 clusters in this org. We have created a custom role in this example that mimics the vApp Author permissions and named it pks-k8-role. The role has been assigned to the user that needs to create k8 clusters. Then, we need to give that role the right to deploy Enterprise PKS clusters:

$ vcd user create 'cse-ent-user' 'password' 'pks-k8-role'

$ vcd role add-right "pks-k8-role" "{cse}:PKS DEPLOY RIGHT"

The user in the tenant org has been granted rights by the cloud admin; now we need to enable the org VDC to allow deployment of Enterprise PKS clusters:

$ vcd cse ovdc enable pks-ovdc -o cse-ent-org -k ent-pks -p "small" -d "test.local" 

metadataUpdate: Updating metadata for Virtual Datacenter pks-ovdc(edu4617e-6034-467b-b696-de1b943e8664) 
task: 3a6bf21b-93e9-44c9-af6d-635020957b21, Updated metadata for Virtual Datacenter pks-ovdc(edu4617e-6034-467b-b696-de1b943e8664), result: success

Note: When enabling an org for Enterprise PKS, we need to define the plan and the domain to be assigned to the instances (and load-balancer) that PKS will provision. Currently, you can only enable one plan per org, but you can run the above command again with a different plan if you’d like to switch in the future.

It’s also worth mentioning that you can create separate OrgVDCs within the same org and enable one OVDC for Native K8 and the other for Enterprise PKS if users in the same tenant org have different requirements.
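
As a sketch of what that could look like (the OvDC names below are hypothetical; the syntax simply reuses the enable commands shown above):

$ vcd cse ovdc enable dev-native-ovdc -o cse-ent-org -k native
$ vcd cse ovdc enable prod-pks-ovdc -o cse-ent-org -k ent-pks -p "small" -d "test.local"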

Finally, we are ready to provision our PKS cluster. We’ll login as our cse-ent-user and deploy our cluster:

$ vcd login vcd.example.com cse-ent-org cse-ent-user -iw 
Password: 
cse-ent-user logged in, org: 'cse-ent-org', vdc: 'pks-ovdc'

$ vcd cse cluster create test-cluster-pks
property                     value
---------------------------  --------------------------------------------------------
compute_profile_name         cp--41e132c6-4480-48b1-a075-31f39b968a50--cse-ent-ovdc-1
kubernetes_master_host       test-cluster-pks.test.local
kubernetes_master_ips        In Progress
kubernetes_master_port       8443
kubernetes_worker_instances  2
last_action                  CREATE
last_action_description      Creating cluster
last_action_state            in progress
name                         test-cluster-pks
pks_cluster_name             test-cluster-pks---5d33175a-3010-425b-aabe-bddbbb689b7e
worker_haproxy_ip_addresses

We can continue to monitor the status of our cluster create with the cluster list or cluster info commands:

$ vcd cse cluster list 

k8s_provider    name              status            vdc
--------------  ----------------  ----------------  --------------
ent-pks         test-cluster-pks  create succeeded  pks-ovdc

Now that we have verified our cluster has been created successfully, we need to obtain the config file so we can access the cluster with kubectl:

$ vcd cse cluster config test-cluster-pks > ~/.kube/config
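
Just as with the native cluster earlier, a quick kubectl check (assuming kubectl is installed locally) confirms we can reach the new cluster’s API:

$ kubectl get nodes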

Now our user is ready to deploy apps on their PKS cluster!

As a final test, let’s see what happens when a user in the same org that we just enabled for Enterprise PKS (cse-ent-org) tries to provision a cluster. This user (vapp-user) has been assigned the “vApp Author” role as it exists “out of the box.”

$ vcd login vcd.example.com cse-ent-org vapp-user -iw 
Password: 
vapp-user logged in, org: 'cse-ent-org', vdc: 'pks-ovdc'

$ vcd cse cluster create test-cluster-pks-2 
Usage: vcd cse cluster create [OPTIONS] NAME 
Try "vcd cse cluster create -h" for help.

Error: Access Forbidden. Missing required rights. 

There we have it, RBAC in full effect!! The user cannot provision a cluster, even though the org is enabled for Enterprise PKS cluster creation, because their assigned role does not have the rights to do so.

Conclusion

This was a quick overview of the capabilities provided by the Role Based Access Control functionality present in VMware’s Container Service Extension 2.0 for vCloud Director. We were able to allow users in orgs to provision k8 clusters of both the native and Enterprise PKS variants. We also showcased how we can prevent “unprivileged” users in the same org from provisioning k8 clusters. Hope you found it useful!!

Deploying VMware vCloud Director on a Single Virtual Machine with a Single Network Interface

Recently, while testing the new Container Service Extension 2.0 Beta release, I found myself needing a quick (and easily replicable) instantiation of vCloud Director in my lab environment. Since this was going into my lab, I wanted to use the least amount of resources and virtual machines possible. I decided to deploy a single CentOS virtual machine that houses the PostgreSQL database, the RabbitMQ server (for my subsequent deployment of CSE), and the actual vCD server itself. I also decided to deploy with a single network interface to keep things simple.

Before we get started, I want to lay out some assumptions I’ve made in this environment that will need to be taken into consideration if you’d like to replicate this deployment as documented:

  • All of my servers’ hostnames are resolvable (I’m using dnsmasq to easily provide DNS/DHCP support in my lab)

  • I’ve disabled firewalld, as this lab is completely isolated from outside traffic. This is NOT secure and NOT recommended for a production deployment. See the installation documentation for vCD’s port requirements.

  • I’ve also persistently disabled SELinux. Again, this is NOT secure and NOT recommended for production, but I wanted one less thing to troubleshoot in case of config issues.

  • I’ve configured an NTP server in my lab that all the servers connect to. NTP is a requirement for vCD installation.

  • I am going to use the tooling provided by vCD to create self-signed SSL certs for use with vCD. Again, this is NOT secure and NOT recommended for production, but it is better suited for quick test deployments in a controlled lab environment.

I’ve configured a CentOS 7.6 server with 4 vCPU, 8GB of memory and a 20GB hard drive. After installation of my OS, I verify the configuration stated above and update my server to the latest and greatest:

# yum update -y

Installing PostgreSQL

At this point, we are ready to install and configure our PostgreSQL database (note: vCD requires PostgreSQL 10).

First, we’ll need to configure our server to have access to the PostgreSQL repo:

# rpm -Uvh https://yum.postgresql.org/10/redhat/rhel-7-x86_64/pgdg-centos10-10-2.noarch.rpm

Now that we have configured the repo, we need to install the PostgreSQL 10 packages:

# yum install -y postgresql10-server postgresql10

Now that the database packages are installed, we need to initialize the database, start the service, and ensure it starts automatically at boot:

# /usr/pgsql-10/bin/postgresql-10-setup initdb
# systemctl start postgresql-10.service
# systemctl enable postgresql-10.service

Now that Postgres is installed, let’s verify the installation by logging in to the database with the “postgres” user (created during installation) and setting the password:

# su - postgres -c "psql"

psql (10.0)
Type "help" for help.
postgres=# \password postgres
**enter pw at prompt**
postgres=# \q

We can run the createuser command as the postgres OS user to create the vcloud postgres user:

# su - postgres
-bash-4.2$ createuser vcloud --pwprompt

Log back into the psql prompt to create the database the vCD instance will utilize (vcloud), as well as to set the vcloud user password:

-bash-4.2$ psql
postgres=# create database vcloud owner vcloud;
CREATE DATABASE
postgres=# alter user vcloud password 'your-password';
ALTER ROLE

Next, we’ll need to allow our vcloud user to log in to the database:

postgres=# alter role vcloud with login;
ALTER ROLE
postgres=# \q

Finally, we need to allow logins to the Postgres DB with a username/password combination. Since I’m deploying this in a controlled lab environment, I’m going to open connections up to all IP addresses. Add the following line to the bottom of the ~/10/data/pg_hba.conf file (editing as the postgres user):

-bash-4.2$ vi ~/10/data/pg_hba.conf

host all all 0.0.0.0/0 md5

We also need to ensure that the database is listening for connections. Edit the postgresql.conf file, ensure the following line is not commented out, and change 'localhost' to '*':

-bash-4.2$ vi 10/data/postgresql.conf

listen_addresses = '*'

Now that we’ve made these changes, return to the root user and restart the PostgreSQL service:

-bash-4.2$ exit
# systemctl restart postgresql-10
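
Optionally, we can confirm that password logins over TCP now work by connecting as the vcloud user (a quick sanity check; substitute your server’s IP address, and enter the password we set above when prompted):

# psql -h 10.10.10.100 -U vcloud -d vcloud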

Installing RabbitMQ

Now that we’ve got our PostgreSQL DB configured, we need to configure RabbitMQ on the server. AMQP, the Advanced Message Queuing Protocol, is an open standard for message queuing that supports flexible messaging for enterprise systems. vCloud Director uses the RabbitMQ AMQP broker to provide the message bus used by extension services, object extensions, and notifications.

On our CentOS install, we need to configure access to the EPEL repo, which provides packages and dependencies we’ll need to install RabbitMQ. After configuring the repo, we need to install Erlang, which is the language RabbitMQ is written in:

# yum -y install epel-release
# yum -y install erlang socat

For Linux installs, RabbitMQ provides a precompiled RPM which can be installed directly on the server (once Erlang is installed). Download and install RabbitMQ via the commands below:

# wget https://www.rabbitmq.com/releases/rabbitmq-server/v3.6.10/rabbitmq-server-3.6.10-1.el7.noarch.rpm
# rpm -Uvh rabbitmq-server-3.6.10-1.el7.noarch.rpm

Now that we have installed RabbitMQ on the server, we are ready to start the RabbitMQ server, ensure it automatically starts on boot, and verify the status of the service:

# systemctl start rabbitmq-server
# systemctl enable rabbitmq-server
# systemctl status rabbitmq-server

Once we’ve verified the status of the RabbitMQ service is “active,” we need to set up an admin user (I’ve used admin in this case, but you can configure any username you’d like) to allow connections to the queue from vCD:

# rabbitmq-plugins enable rabbitmq_management
**output omitted**
# chown -R rabbitmq:rabbitmq /var/lib/rabbitmq/
# rabbitmqctl add_user admin **your-password**
Creating user "admin"
# rabbitmqctl set_user_tags admin administrator
Setting tags for user "admin" to [administrator]
# rabbitmqctl set_permissions -p / admin ".*" ".*" ".*"
Setting permissions for user "admin" in vhost "/"
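
Before moving on, it’s worth a quick sanity check that the broker is happy with our new user. We can list users from the CLI, and since we enabled the management plugin, the management UI should also be reachable on its default port (15672) using the admin credentials created above:

# rabbitmqctl list_users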

Installing vCloud Director

We’ve got PostgreSQL and RabbitMQ configured on the server; now we are ready to pull down and install the vCD binary. I’ve pulled the vCD install package directly from MyVMware down to my local desktop, copied the file over to my vCD server at /vcloud.bin, and modified permissions so I can execute the script. Before we run the script, we need to install a couple of dependencies the script requires to run to completion:

# yum install libXdmcp libXtst redhat-lsb -y

Now we are ready to run the installation script. After the script finishes, decline the option to run the configure script as we will do this manually later:

# chmod u+x /vcloud.bin
# ./vcloud.bin

**output omitted**

Would you like to run the script now? (y/n)? N

Now that we’ve installed the vCD packages, we can use the tooling provided to generate self-signed certificates. If you have existing certs or you’d like to create and sign your own certs, please refer to the installation documentation for the proper procedure to create signed certs or upload existing certs. The following command creates certificates for the http and console proxy services and stores them in a keystore file at /tmp/cell.ks with a password of mypassword:

# cd /opt/vmware/vcloud-director/bin
# ./cell-management-tool generate-certs -j -p -o /tmp/cell.ks -w mypassword

We can verify the keystore contains 2 keys with the following command:

# /opt/vmware/vcloud-director/jre/bin/keytool -storetype JCEKS \
-storepass mypassword -keystore /tmp/cell.ks -list
**output omitted**

consoleproxy, May 6, 2019, PrivateKeyEntry,
Certificate fingerprint (SHA1): 7B:FB...
http, May 6, 2019, PrivateKeyEntry,
Certificate fingerprint (SHA1): 14:DD…

Configuring vCloud Director

Now that we have created our certs, we are ready to configure the vCD server. Since we are using the same interface for http and console proxy, we need to perform an unattended install and define ports for each service. For details on this process, see the installation documentation section for unattended installations. As an example, the following command configures both http and console proxy on the same IP (10.10.10.100), using the default port 443 for secure http access while using 8443 for secure console access. We also define the keystore we created earlier, as well as the password for that keystore.

First, let’s change directory into the location of the configure script:

# cd /opt/vmware/vcloud-director/bin

Now we are ready to run the configure command:

# ./configure -ip 10.10.10.100 -cons 10.10.10.100 --primary-port-http 80 \
--console-proxy-port-https 8443 -dbtype postgres \
-dbhost 10.10.10.100 -dbname vcloud -dbuser vcloud \
-dbpassword **db-password** -k /tmp/cell.ks -w mypassword \
--enable-ceip false -unattended
......................................../
Database configuration complete.

We can view the logs for the configuration attempt in the directory /opt/vmware/vcloud-director/logs/ at the configure-timestamp location:

# cd /opt/vmware/vcloud-director/logs/
# less configure-timestamp

**output omitted**

vCloud Director configuration is now complete.
Once the vCloud Director server has been started you will be able to
access the first-time setup wizard at this URL:
https://FQDN

Before starting the vCD service, we’ll also need to configure a system administrator user using the cell-management-tool. This will allow us to log into the vCloud Director admin portal and begin our vCD configuration (you’ll also be asked to specify a password for the system admin user after running the cell-management-tool command):

# cd /opt/vmware/vcloud-director/bin
# ./cell-management-tool system-setup --user admin --full-name "VCD System Administrator" \
--email vcd-admin@example.com --system-name VCD --installation-id 1

where --user is our admin user name, --system-name is the name that is used to create a vCenter folder in each vCenter Server with which it registers, and --installation-id is the numerical id of the specific instance of VCD. For more information on using the cell-management-tool to configure the system admin user, please refer to the VMware documentation.

At this point, we are ready to start the vCD service:

# service vmware-vcd start    
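
Startup can take several minutes. One way to keep an eye on progress (a sketch; the log path may vary slightly between versions) is to tail the cell log and wait for the message indicating application initialization is complete:

# tail -f /opt/vmware/vcloud-director/logs/cell.log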

After confirming the service has started, navigate to https://FQDN to begin your vCD configuration!!

Enjoy!!

Welcome!!

Well, here we are. Welcome to mannimal.blog!! 

This space will serve as a place to publish some best practices around solution design for cloud providers looking to build modern platforms based on VMware technology. 

But before we get into that, I wanted to give a little background on myself. My name is Joe Mann and I am a Staff Cloud Solutions Architect at VMware. I cover the VMware Cloud Provider Program (VCPP) with a focus on cloud native technologies. I spent the last 7 years of my career as a Solutions Architect at Red Hat covering the Red Hat Cloud and Service Provider Program. As you can tell, my focus in this space has been helping cloud and service providers build modern infrastructure to support their customers’ ever-evolving needs. 

Stay tuned for some upcoming posts that will focus on vCloud Director install and config as well as a look at the 2.0 Beta release of the Container Service Extension. Thanks for stopping by!!