With the recent release of the Container Service Extension 2.5.0, I wanted to take some time to walk through the installation and configuration of the Container Service Extension (CSE) server in conjunction with VMware vCloud Director 10.
This will be a series of 3 blog posts that cover the following topics:
- Part 1: CSE Software Installation, Server Configuration and Deployment
- Part 2: CSE Client Setup and Kubernetes Cluster Deployment
- Part 3: Configuring CSE Enterprise with VMware Enterprise PKS
Container Service Extension Overview
Before we get started, I wanted to talk a bit about CSE and what purpose it serves in a Service Provider’s environment. The Container Service Extension is a VMware vCloud Director extension that helps tenants create, lifecycle manage, and interact with Kubernetes clusters in vCloud Director-managed environments.
There are currently two versions of CSE: Standard and Enterprise. CSE Standard brings Kubernetes-as-a-Service to vCD by creating customized vApp templates and enabling tenant/organization administrators to deploy fully functional Kubernetes clusters in self-contained vApps. CSE Standard cluster creation can be enabled on existing NSX-V backed OrgVDCs in a tenant’s environment. With the introduction of CSE Enterprise in the CSE 2.0 release, VMware also added the ability for tenants to provision VMware Enterprise PKS Kubernetes clusters backed by NSX-T resources in vCloud Director managed environments. In this blog post, I am going to focus on the enablement of CSE Standard Kubernetes cluster creation in an existing vCloud Director OrgVDC.
For more information on CSE, have a look at the Kubernetes-as-a-Service in vCloud Director reference architecture (authored by yours truly 😄) as well as the CSE Installation Documentation.
In order to install CSE 2.5.0, please review the CSE Server Installation Prerequisites section of the CSE documentation and ensure you have fulfilled all of the vCD-specific requirements to support CSE Standard Kubernetes cluster deployment. As mentioned in that documentation, VMware recommends utilizing a user with the System Administrator role in the vCD environment for CSE server management.
Along with the prereqs mentioned in the documentation above, please ensure you have a RabbitMQ server available as the CSE server utilizes AMQP as a messaging queue to communicate with the vCD cell, as referenced in the diagram below:
For vCloud Director 10, you will need to deploy RabbitMQ 3.7.x (see vCloud Director Release notes for RabbitMQ compatibility information). For more information on deploying RabbitMQ, please refer to the RabbitMQ installation documentation.
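If you already have a RabbitMQ server stood up, a quick sanity check of the broker before proceeding might look something like this (a minimal sketch, assuming a standard package-based install where the rabbitmqctl and rabbitmq-plugins utilities are available on the RabbitMQ host):

# Confirm the broker is running and check the reported version (should be 3.7.x)
rabbitmqctl status

# Optionally enable the management UI, which makes inspecting exchanges and vhosts easier later
rabbitmq-plugins enable rabbitmq_management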
Finally, CSE requires Python 3.7.3 or later at the time of this writing. In this walkthrough, I have chosen to install the CSE Server on a CentOS 7.6 install within a Python 3.7.3 virtual environment but any variant of Linux that supports Python 3.7.3 installations will suffice. For more information on configuring a virtual environment to support a CSE Server installation, see my earlier blog post which walks through the process.
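For convenience, here is a minimal sketch of the virtual environment setup referenced throughout this post (this assumes Python 3.7.3 is already installed on the CentOS host and available as python3.7, and that the environment lives in the cse user's home directory; see the blog post linked above for the full walkthrough):

# Create and activate a Python 3.7.3 virtual environment for the CSE server
python3.7 -m venv ~/cse-env
source ~/cse-env/bin/activate

# Make sure pip is up to date inside the virtual environment
pip install --upgrade pip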
Installing CSE Server 2.5.0
Now that I’ve established the prereqs, I am ready to install the bits that will support the CSE server installation.
Note: The following commands will need to be run on the Linux server hosting the CSE server installation.
First things first, I’ll create a cse user that I’ll use to manage the CSE server:
# useradd cse
# passwd cse
# su - cse
Now, after creating the Python 3.7.3 virtual environment, I’ll need to activate it. I created my virtual environment in the cse user's home directory at ~/cse-env:
$ source ~/cse-env/bin/activate
Note: After activating the virtual environment, you should see (virtual-environment-name) prepended to your bash prompt to confirm you are operating in the virtual environment.
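For example, with a virtual environment named cse-env, the prompt will look something like the following (the hostname here is just illustrative):

(cse-env) [cse@cse-server ~]$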
Now I’m ready to install the CSE server bits within the virtual environment! Utilize pip to pull down the CSE packages:
$ pip install container-service-extension
Verify CSE is installed and the version is 2.5.0:
$ cse version
CSE, Container Service Extension for VMware vCloud Director, version 2.5.0
Now I’m ready to build the configuration file and deploy the CSE server!!
Container Service Extension Configuration File
The CSE server utilizes a yaml config file that contains information about the vCloud Director/vCenter infrastructure that will be supporting the Kubernetes cluster deployments. The config file also contains information regarding the RabbitMQ broker that I configured in Part 1 of the series. This config file will be used to install and run the CSE service on the CSE server.
Before we get started, I wanted to take some time to talk about how CSE deploys Kubernetes clusters. CSE uses customized VM templates (Kubernetes templates) as building blocks for deployment of Kubernetes clusters. These templates are crucial for CSE to function properly. New in version 2.5.0, CSE utilizes “pre-configured” template definitions hosted on a remote repository.
Templates vary by guest OS (e.g. PhotonOS, Ubuntu), as well as software versions, like Kubernetes, Docker, and Weave. Each template name is uniquely constructed based on the flavor of guest OS, Kubernetes, and Weave versions. The definitions of different templates reside in an official location hosted at a remote repository URL. The CSE sample config file, out of the box, points to the official location of those templates definitions. The remote repository is officially managed by maintainers of the CSE project. For more information on template management in CSE, refer to the CSE documentation.
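If you're curious what those remote template definitions actually look like, you can pull the template cookbook straight from the repository URL that ships in the sample config (a quick sketch using curl; this is the same URL I use in my broker section later in this post):

# Fetch the remote template cookbook and page through the template definitions
curl -s https://raw.githubusercontent.com/vmware/container-service-extension-templates/master/template.yaml | less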
Now that we’ve discussed some of the changes for template management in CSE 2.5.0, I’m ready to start our CSE server installation.
If you’ll remember back to Part 1 of the series, I installed the CSE bits within a Python 3.7.3 virtual environment, so the first thing I’ll do is activate that virtual environment and verify our CSE version:
Note: All commands below should be run from the CSE server CLI.
$ source cse-env/bin/activate
$ cse version
CSE, Container Service Extension for VMware vCloud Director, version 2.5.0
I’ll use the cse command to generate a sample file (I’m calling mine config.yaml) that I can use to build out my config file for my CSE installation:
$ cse sample -o config.yaml
Great! Now I have a skeleton configuration file to use to build out my CSE server config file. Let’s have a look at each section of the config file.
The amqp section of the config file contains information about the RabbitMQ AMQP broker that the CSE server will use to communicate with the vCloud Director instance. Let’s have a look at my completed amqp section below. All of the values used below are from my lab and some will differ for your deployment:
amqp:
  exchange: cse-exchange        <--- RabbitMQ exchange name
  host: rabbitmq.vcd.zpod.io    <--- RabbitMQ hostname
  password: <password>          <--- RabbitMQ user's password
  port: 5672                    <--- RabbitMQ port (default is 5672)
  prefix: vcd                   <--- default value, can be left as is
  routing_key: cse              <--- default value, can be left as is
  ssl: false                    <--- Set to "true" if using SSL for RabbitMQ connections
  ssl_accept_all: false         <--- Set to "true" if using SSL and utilizing self-signed certs
  username: cse-amqp            <--- RabbitMQ username (with access to the vhost)
  vhost: /                      <--- RabbitMQ virtual host that contains the exchange
The exchange defined in the file above will be created by the CSE server on install (if it doesn’t already exist). This exchange should NOT be the same one configured in the Extensibility section of the vCD Admin Portal. However, the Extensibility section of the vCD Admin Portal must be configured using the same virtual host (/ in my example above). See the screenshot below for an example of my vCD Extensibility config:
No manual config is required on the RabbitMQ server side aside from ensuring the RabbitMQ user (cse-amqp in the example above) has full access to the virtual host. See my previous post on Deploying vCloud Director for information on creating RabbitMQ users.
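For reference, creating that user and granting it full permissions on the / virtual host might look something like the following on the RabbitMQ server (a minimal sketch; the password is a placeholder, and the vhost/user names match my amqp section above):

# Create the cse-amqp user and grant it full configure/write/read permissions on the "/" vhost
rabbitmqctl add_user cse-amqp 'MySecretPassword'
rabbitmqctl set_permissions -p / cse-amqp ".*" ".*" ".*"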
As you might guess, the vcd section of the config file contains information regarding the vCloud Director instance that CSE will communicate with via the API. Let’s have a look at my completed vcd section below:
vcd:
  api_version: '33.0'           <--- vCD API version
  host: director.vcd.zpod.io    <--- vCD hostname
  log: true                     <--- Set to "true" to generate log files for CSE/vCD interactions
  password: my_secret_password  <--- vCD system admin's password
  port: 443                     <--- default value, can be left as is unless otherwise needed
  username: administrator       <--- vCD system admin username
  verify: false                 <--- Set to "true" to verify SSL certificates
In the vcs section, we define the vCenter instances that are being managed by vCD. CSE needs access to the vCenter appliances in order to perform guest operation modifications, queries, and program execution. In my lab, my vCD deployment is managing 2 vCSA instances. You can add additional vCenter entries if required:
vcs:
- name: vc-pks                              <--- vCenter name as it appears in vCD
  password: <password>                      <--- vCenter admin's password
  username: firstname.lastname@example.org  <--- vCenter admin's username
  verify: false                             <--- Set to "true" to verify SSL certificates
- name: vc-standard
  password: <password>
  username: email@example.com
  verify: false
The service section is small and really only has one config decision to make. If the enforce_authorization flag is set to false, ANY user that has permissions to create vApps in any Org in the vCD environment can provision Kubernetes clusters via CSE. If set to true, CSE refuses any request to create Kubernetes clusters unless the requesting user (and its org) has the proper rights assigned to allow the operation, which lets you utilize RBAC functionality to grant specific Orgs, and specific users within those Orgs, rights to create clusters. For more information on configuring RBAC, see my previous blog post that walks through RBAC enablement scenarios (although the blog post was authored utilizing CSE 2.0, the constructs have not changed in 2.5.0).
service:
  enforce_authorization: true
  listeners: 5        <--- number of threads CSE server can utilize
  log_wire: false     <--- if set to "true", will log all REST calls initiated by CSE to vCD
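With enforce_authorization set to true, no tenant can deploy clusters until the CSE right is granted to their org and a role within it. As a rough sketch only (run with vcd-cli as a system administrator; the org and role names are hypothetical, and the exact command syntax can vary by vcd-cli version, so refer to the RBAC post referenced above for the full procedure):

# Publish the CSE right to a tenant org, then add it to a role within that org
vcd right add -o 'tenant-org' '{cse}:CSE NATIVE DEPLOY RIGHT'
vcd role add-right 'vApp Author' '{cse}:CSE NATIVE DEPLOY RIGHT'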
Here’s where all the magic happens!! The broker section is where we define where and how the CSE server will deploy the first Kubernetes cluster, which serves as the basis for the vApp template that will be used for tenants’ Kubernetes cluster deployments.
- The catalog value is the name CSE will use when creating a publicly shared catalog within my org for storing the vApp template(s). The CSE server will create this catalog in vCD when I install the CSE server.
- The default_template_name value is the template name that CSE will use by default when users deploy Kubernetes clusters via CSE without defining a specific template. Refer to the CSE documentation for available template names and revision numbers.
- The default_template_revision value is a numerical value associated with the version of the template released by VMware. At the time of writing, all available templates are at revision 1.
- The ip_allocation_mode value is the mode to be used during the install process to build the template. Possible values are dhcp and pool. During creation of clusters for tenants, pool IP allocation mode is always used.
- The network value is an OrgVDC network within the OrgVDC that will be used during the install process to build the template. It should have outbound access to the public internet in order to reach the template repository. The CSE server does not need to be connected to this network.
- The org value is the organization that contains the shared catalog where the Kubernetes vApp templates will be stored.
- The remote_template_cookbook_url value is the URL of the template repository where all template definitions and associated script files are hosted. This is new in CSE 2.5.0.
- The storage_profile value is the name of the storage profile to use when creating the temporary vApp used to build the Kubernetes cluster vApp template.
- The vdc value is the virtual datacenter within the org (defined above) that will be used during the install process to build the vApp template.
Here is an example of my completed broker section:
broker:
  catalog: cse-25
  default_template_name: ubuntu-16.04_k8-1.15_weave-2.5.2
  default_template_revision: 1
  ip_allocation_mode: pool
  network: outside
  org: cse_25_test
  remote_template_cookbook_url: https://raw.githubusercontent.com/vmware/container-service-extension-templates/master/template.yaml
  storage_profile: '*'
  vdc: cse_vdc_1
The template_rules section is new in CSE 2.5.0 and is entirely optional. It allows system admins to utilize vCD compute policies to limit which users have access to which Kubernetes templates. By default, any user that has access to create Kubernetes clusters via CSE has access to all available templates.
The pks_config section points to a separate .yaml config file that contains information about a VMware Enterprise PKS deployment if you intend to utilize CSE Enterprise as well. Refer to [Part 3](https://mannimal.blog/2019/11/22/container-service-extension-2-5-installation-part-3/) of my series for information on building the PKS config file.
Note: System admins can add CSE Enterprise capabilities via the pks_config flag at any point after the CSE server installation; it does not have to be set on initial install.
pks_config: null <--- Set to name of .yaml config file for CSE Enterprise cluster deployment
Now that I’ve gone over the config file, I am ready to proceed with my installation of the CSE server!!
CSE Server Installation and Validation
Before starting the install, we need to set the correct permissions on the config file:
chmod 600 config.yaml
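Optionally, the config file can also be validated on its own before kicking off the install. A quick sketch (the exact syntax may differ slightly between CSE releases, so check cse check --help if the command complains):

# Validate the config file without installing anything
cse check config.yaml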
After building out the config file, I’ll simply need to run the following command to install CSE in the environment. I’ll use the --skip-template-creation flag to ensure the configuration is sound, and install the desired template in a subsequent command:
cse install -c config.yaml --skip-template-creation
Required Python version: >= 3.7.3
Installed Python version: 3.7.3 (default, Sep 16 2019, 12:54:43)
[GCC 4.8.5 20150623 (Red Hat 4.8.5-36)]
Validating config file 'config.yaml'
Connected to AMQP server (rabbitmq.vcd.zpod.io:5672)
InsecureRequestWarning: Unverified HTTPS request is being made. Adding certificate verification is strongly advised.
Connected to vCloud Director (director.vcd.zpod.io:443)
Connected to vCenter Server 'vc-standard' as 'firstname.lastname@example.org' (vcsa.vcd.zpod.io:443)
Connected to vCenter Server 'vc-pks' as 'email@example.com' (vcsa.pks.zpod.io:443)
Config file 'config.yaml' is valid
Installing CSE on vCloud Director using config file 'config.yaml'
Connected to vCD as system administrator: director.vcd.zpod.io:443
Checking for AMQP exchange 'cse-exchange'
AMQP exchange 'cse-exchange' is ready
Updated cse API Extension in vCD
Right: CSE NATIVE DEPLOY RIGHT added to vCD
Right: CSE NATIVE DEPLOY RIGHT assigned to System organization.
Right: PKS DEPLOY RIGHT added to vCD
Right: PKS DEPLOY RIGHT assigned to System organization.
Created catalog 'cse-25'
Skipping creation of templates
Great!! I’ve installed the CSE Server. Now I’m ready to deploy a Kubernetes cluster vApp template into my cse-25 catalog. I can obtain a template name from the Template Announcement section of the CSE documentation. I can also use the following cse command from the CLI of the CSE server to query available templates:
$ cse template list -d remote
I can also define an ssh-key that will be injected into the VMs that are provisioned to act as the Kubernetes nodes with the --ssh-key flag. The system admin could then use the private ssh-key to access the Kubernetes nodes’ operating system via SSH. I’ll use the following cse command to install the Ubuntu Kubernetes template:
$ cse template install ubuntu-16.04_k8-1.15_weave-2.5.2 --ssh-key id_rsa.pub
This command pulls down an Ubuntu OVA to the CSE server and then pushes it to the vCD environment, creates a set of VMs, and performs all required post-provisioning customization to create a functioning Kubernetes cluster.
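As an aside, if a keypair doesn't already exist on the CSE server, one can be generated beforehand with standard OpenSSH tooling (a quick sketch; adjust the output path to match wherever you point the --ssh-key flag):

# Generate an RSA keypair; the public key is what gets injected into the Kubernetes node VMs
ssh-keygen -t rsa -b 4096 -f ~/.ssh/id_rsa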
After the Kubernetes cluster is created, CSE creates a vApp template based on the cluster and then deletes the running cluster from the environment. This vApp template will then be used by CSE to create Kubernetes clusters when tenants request them via the vcd-cli.
Now I’m finally ready to test the install with the cse run command, which will run the CSE service in the current bash shell:
$ cse run
---output omitted---
AMQP exchange 'vcd' exists
CSE on vCD is currently enabled
Found catalog 'cse-25'
CSE installation is valid
Started thread 'MessageConsumer-0 (140180650903296)'
Started thread 'MessageConsumer-1 (140180417672960)'
Started thread 'MessageConsumer-2 (140180634117888)'
Started thread 'MessageConsumer-3 (140180642510592)'
Started thread 'MessageConsumer-4 (140180409280256)'
Container Service Extension for vCloud Director
Server running using config file: config.yaml
Log files: cse-logs/cse-server-info.log, cse-logs/cse-server-debug.log
waiting for requests (ctrl+c to close)
Awesome!! We can see the AMQP threads are created in the output and the server is running using my config file. Use ctrl+c to stop the service and return to the command prompt.
Controlling the CSE Service with systemd
As you can see above, I can manually run the CSE server with the cse run command, but it makes more sense to automate the starting and stopping of the CSE service. To do that, I’ll create a systemd unit file and manage the CSE service via systemd.
First, I’ll need to create a script that the systemd unit file will refer to in order to start the service. My virtual environment is located at /home/cse/cse-env and my CSE config file is located at /home/cse/config.yaml. I’ll use vi to create the cse.sh script:
$ vi ~/cse.sh
And add the following text to the new file and save:
#!/usr/bin/env bash
source /home/cse/cse-env/bin/activate
cse run -c /home/cse/config.yaml
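Before wiring this into systemd, the script can be sanity-checked by running it directly as the cse user (it should drop into the same "waiting for requests" state shown earlier; stop it with ctrl+c):

# Optional: run the start script in the foreground to confirm it works
bash /home/cse/cse.sh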
Now that I’ve created the start script, I need to create a unit file for systemd. I’ll access the root user on the CSE server:
$ su -
Now I’m ready to create the unit file. I’ll use vi to create the cse.service file:
# vi /etc/systemd/system/cse.service
And add the following text to the file:
[Service]
ExecStart=/bin/sh /home/cse/cse.sh
Type=simple
User=cse
WorkingDirectory=/home/cse
Restart=always

[Install]
WantedBy=multi-user.target
After adding the unit file, I’ll need to reload the systemd daemon so it picks up the new unit:
# systemctl daemon-reload
Now I’ll start the CSE service and enable it to ensure it starts automatically on boot:
# systemctl start cse
# systemctl enable cse
Finally, I’ll check the status of the service to ensure it is active and verify we see the messaging threads:
# service cse status
Redirecting to /bin/systemctl status cse.service
● cse.service
   Loaded: loaded (/etc/systemd/system/cse.service; disabled; vendor preset: disabled)
   Active: active (running) since Thu 2019-10-10 17:00:50 EDT; 13s ago
 Main PID: 9621 (sh)
   CGroup: /system.slice/cse.service
           ├─9621 /bin/sh /home/cse/cse.sh
           └─9624 /home/cse/cse-ga/bin/python3.7 /home/cse/cse-ga/bin/cse run -c /home/cse/config.yaml

Oct 10 17:00:59 cse-25.vcd.zpod.io sh: CSE installation is valid
Oct 10 17:01:00 cse-25.vcd.zpod.io sh: Started thread 'MessageConsumer-0 (139712918025984)'
Oct 10 17:01:00 cse-25.vcd.zpod.io sh: Started thread 'MessageConsumer-1 (139712892847872)'
Oct 10 17:01:00 cse-25.vcd.zpod.io sh: Started thread 'MessageConsumer-2 (139712901240576)'
Oct 10 17:01:01 cse-25.vcd.zpod.io sh: Started thread 'MessageConsumer-3 (139712909633280)'
Oct 10 17:01:01 cse-25.vcd.zpod.io sh: Started thread 'MessageConsumer-4 (139712882005760)'
Oct 10 17:01:01 cse-25.vcd.zpod.io sh: Container Service Extension for vCloud Director
Oct 10 17:01:01 cse-25.vcd.zpod.io sh: Server running using config file: /home/cse/config.yaml
Oct 10 17:01:01 cse-25.vcd.zpod.io sh: Log files: cse-logs/cse-server-info.log, cse-logs/cse-server-debug.log
Oct 10 17:01:01 cse-25.vcd.zpod.io sh: waiting for requests (ctrl+c to close)
Success!! Now I’m ready to start interacting with the CSE server with the CSE client via the vcd-cli.
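Since the service now runs under systemd, its output also lands in the journal, which is handy for day-to-day monitoring (standard journalctl usage):

# Follow the CSE service output live
journalctl -u cse -f

# Show the 50 most recent lines from the service
journalctl -u cse -n 50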
In Part 1 of my series on CSE Installation, I detailed the steps required to install the CSE 2.5.0 bits within a Python 3.7.3 virtual environment. I also took a detailed look at the configuration file used to power the CSE Server before installing and running the server itself.
Join me in Part 2 of this series on the Container Service Extension, where I’ll walk through configuring a tenant to allow provisioning of Kubernetes clusters via the CSE extension in vCloud Director.