In my last post, I gave an update and overview regarding the new Container Service Extension 2.6.0 release, including a look at the new CSE UI Plugin for VCD. As I had to perform the operation myself, I wanted to take some time to detail the process for upgrading existing CSE installations to 2.6.0. There aren’t many changes from 2.5.1 to 2.6.0, as far as server installation goes, but there are a few to note.
Create New virtualenv and Install CSE 2.6.0 Bits
I like to utilize Python virtual environments with my CSE installs, which allows me to jump back and forth between CSE builds as I work with the engineering team to test new releases or set up reproducers for customer environments. At the very least, I recommend using a virtual environment so you don't have to wrestle with base Python version compatibility on your OS. See my post on creating a Python 3.7.3 virtual environment to support CSE server installations here.
So the first thing I'll do on my CSE server is create that new virtual environment:
$ mkdir cse-2.6.0
$ python3.7 -m virtualenv cse-2.6.0/
$ source cse-2.6.0/bin/activate
Now I’m ready to install the new build of CSE:
$ pip install container-service-extension
$ cse version
CSE, Container Service Extension for VMware vCloud Director, version 2.6.0
Note: If you'd like to use the same virtual environment as you used in your previous installation, you simply need to source that virtual environment and upgrade CSE:
$ pip install --upgrade container-service-extension
The CSE Config File
The config file structure is largely the same as it was in CSE 2.5.1, so I simply copied my original config (config.yaml) to a new file (config-260.yaml) to use for my 2.6.0 deployment. In my post on installing CSE 2.5.1, I took a detailed look at the config file and walked through each section and what information is required. Please refer to that post for additional clarification on the fields in the CSE config file.
The one change I needed to make in my existing config file was to add the telemetry value in the service stanza, as seen below:
service:
  enforce_authorization: true
  listeners: 5
  log_wire: false
  telemetry:
    enable: true
If telemetry is enabled, anonymized usage data will be sent to VMware for data collection purposes. Set enable: false to disable telemetry.
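For reference, opting out only changes that one value, so the same stanza with telemetry disabled would look like this:

```yaml
# Same service stanza as above, with telemetry collection disabled
service:
  enforce_authorization: true
  listeners: 5
  log_wire: false
  telemetry:
    enable: false
```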
CSE Config File Encryption
Another new feature in CSE 2.6.0 is the ability to encrypt the configuration file(s) for more secure CSE server management. By default, the cse run command, which is used to start the CSE server service, expects an encrypted config file (you can avoid this behavior by using the --skip-config-decryption flag).
I'm going to encrypt my config file for use in my environment with the commands below. Since I am using CSE to manage a VMware PKS deployment, I will encrypt the pks-config.yaml file as well (unchanged from CSE 2.5.1). The cse install and cse run commands require both files to be encrypted with the same password:
$ cse encrypt config-260.yaml --output encrypted-config.yaml
Required Python version: >= 3.7.3
Installed Python version: 3.7.3 (default, Sep 16 2019, 12:54:43)
[GCC 4.8.5 20150623 (Red Hat 4.8.5-36)]
Password for config file encryption: <enter password>
Encryption successful

$ cse encrypt pks-config.yaml --output encrypted-pks-config.yaml
Required Python version: >= 3.7.3
Installed Python version: 3.7.3 (default, Sep 16 2019, 12:54:43)
[GCC 4.8.5 20150623 (Red Hat 4.8.5-36)]
Password for config file encryption: <enter password>
Encryption successful
I also need to restrict permissions on the newly created encrypted config files so that only my user can read and write them:

$ chmod 600 encrypted*
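If you want to verify what mode 600 actually leaves behind, here's a quick sanity check in a scratch directory: the owner keeps read/write, while group and other lose all access:

```shell
# Sanity check in a scratch directory: chmod 600 = owner read/write only
tmp=$(mktemp -d)
touch "$tmp/encrypted-config.yaml"
chmod 600 "$tmp/encrypted-config.yaml"
mode=$(stat -c '%a' "$tmp/encrypted-config.yaml")  # GNU stat, as on this RHEL host
echo "$mode"  # prints 600
rm -rf "$tmp"
```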
Now, I'm ready to test my new CSE 2.6.0 installation by using the cse install command, which will verify my configuration:
$ cse install -c encrypted-config.yaml -p encrypted-pks-config.yaml --skip-template-creation
Required Python version: >= 3.7.3
Installed Python version: 3.7.3 (default, Sep 16 2019, 12:54:43)
[GCC 4.8.5 20150623 (Red Hat 4.8.5-36)]
Password for config file decryption: <enter password>
Decrypting 'encrypted-config.yaml'
Validating config file 'encrypted-config.yaml'
...output omitted...
Skipping creation of templates.
The cse install command will return me to the prompt after a successful run. Now I'm ready to do a test run of the service by using the cse run command in my active terminal session. This will run the CSE service in the foreground:
$ cse run -c encrypted-config.yaml -p encrypted-pks-config.yaml
Required Python version: >= 3.7.3
Installed Python version: 3.7.3 (default, Sep 16 2019, 12:54:43)
[GCC 4.8.5 20150623 (Red Hat 4.8.5-36)]
Password for config file decryption: <enter password>
Decrypting 'encrypted-config.yaml'
Validating config file 'encrypted-config.yaml'
...output omitted...
CSE installation is valid
Started thread 'MessageConsumer-0 (140521677321984)'
Started thread 'MessageConsumer-1 (140521599137536)'
Started thread 'MessageConsumer-2 (140521590744832)'
Started thread 'MessageConsumer-3 (140521582352128)'
Started thread 'MessageConsumer-4 (140521573959424)'
Container Service Extension for vCloud Director
Server running using config file: encrypted-config.yaml
Log files: cse-logs/cse-server-info.log, cse-logs/cse-server-debug.log
waiting for requests (ctrl+c to close)
Great! The CSE server service is now running in the foreground of my terminal window, and I can press ctrl+c to kill the process. Best practice dictates that the CSE server service be managed as a systemd process. In my last post on installing CSE 2.5.1, I detailed the process for setting up a unit file to manage the CSE server service with systemctl. All I need to do now is create a new cse-260.sh script, which will utilize the cse run command above to start the CSE service, and edit my cse.service unit file:
Note: For simplicity's sake, I'm going to use the unencrypted config files, stored in a separate location, for my start script.
$ sudo vi /etc/systemd/system/cse.service

[Service]
ExecStart=/bin/sh /home/config/cse-260.sh <---change made here
Type=simple
User=cse
WorkingDirectory=/home/config
Restart=always

[Install]
WantedBy=multi-user.target

$ cat /home/config/cse-260.sh
#!/usr/bin/env bash
source cse-2.6.0/bin/activate
cse run -c config-260.yaml -p pks-config.yaml --skip-config-decryption
Now, after making changes to the unit file, I need to reload the systemd daemon for the changes to take effect:
$ sudo systemctl daemon-reload
Now I can start the CSE service with systemctl and check its status. I'll also enable it to ensure the service automatically starts on boot:
$ sudo systemctl start cse
$ sudo systemctl status cse
● cse.service
   Loaded: loaded (/etc/systemd/system/cse.service; disabled; vendor preset: disabled)
   Active: active (running) since Thu 2020-04-16 15:07:19 EDT; 23s ago
 Main PID: 3717 (sh)
   CGroup: /system.slice/cse.service
           ├─3717 /bin/sh /home/cse/cse-260.sh
           └─3720 /home/cse/cse-testcode/bin/python3.7 /home/cse/cse-testcode/bin/cse run -c config-260.yaml -p pks-config-...

Apr 16 15:07:40 cse-server.cse.zpod.io sh: CSE installation is valid
Apr 16 15:07:41 cse-server.cse.zpod.io sh: Started thread 'MessageConsumer-0 (140284180268800)'
Apr 16 15:07:41 cse-server.cse.zpod.io sh: Started thread 'MessageConsumer-1 (140284100867840)'
Apr 16 15:07:42 cse-server.cse.zpod.io sh: Started thread 'MessageConsumer-2 (140284092475136)'
Apr 16 15:07:42 cse-server.cse.zpod.io sh: Started thread 'MessageConsumer-3 (140284084082432)'
Apr 16 15:07:42 cse-server.cse.zpod.io sh: Started thread 'MessageConsumer-4 (140284075689728)'
Apr 16 15:07:42 cse-server.cse.zpod.io sh: Container Service Extension for vCloud Director
Apr 16 15:07:42 cse-server.cse.zpod.io sh: Server running using config file: config-260.yaml
Apr 16 15:07:42 cse-server.cse.zpod.io sh: Log files: cse-logs/cse-server-info.log, cse-logs/cse-server-debug.log
Apr 16 15:07:42 cse-server.cse.zpod.io sh: waiting for requests (ctrl+c to close)

$ sudo systemctl enable cse
And there we have it! I've upgraded my CSE 2.5.1 server installation to CSE 2.6.0, and tenants are now able to provision and manage Kubernetes clusters via VMware Cloud Director.
A Note About Existing CSE Clusters
As there are some VCD metadata changes from CSE 2.5.x to CSE 2.6.0, system administrators will need to run the cse convert-cluster command on each cluster that was provisioned prior to the upgrade to CSE 2.6.0. More information about the convert-cluster command can be found here.
As noted in the link, if the cluster was originally provisioned via CSE 2.5.x, there will be no service interruption, as the process simply adds additional VCD metadata tags. However, if the cluster was created via a CSE version prior to 2.5.x, the command will reset the admin password of all nodes in the cluster, which will require a reboot of the nodes. In both scenarios, SSH keys, if provided during cluster deployment, will be preserved on the nodes.
In my lab environment, I have a single cluster named test that was provisioned in CSE 2.5.1. I can run the command below, from the CSE server, to update the metadata tags on that cluster:
$ cse convert-cluster -c config-260.yaml --skip-config-decryption test
Required Python version: >= 3.7.3
Installed Python version: 3.7.3 (default, Sep 16 2019, 12:54:43)
[GCC 4.8.5 20150623 (Red Hat 4.8.5-36)]
Validating config file 'config-260.yaml'
---output omitted---
Processing cluster 'test'.
Processing metadata of cluster.
Finished processing metadata of cluster.
Determining if vm 'mstr-g507 needs processing'.
Determining if vm 'node-8n2o needs processing'.
Successfully processed cluster 'test'
Finished Guest customization on all vms.
This action will need to be performed on all of the clusters in the environment to allow tenants to take advantage of new features in CSE 2.6.0, such as in-place upgrades.
A Note About Updating Clients
As mentioned in my previous post, CSE 2.6.0 introduces the CSE UI plugin for VCD, but you may have some tenants out there who still use vcd-cli to interact with CSE. They will also need to update their container-service-extension Python package to interact with CSE 2.6.0 post upgrade.
For customers with existing deployments:
$ pip uninstall container-service-extension
$ pip install container-service-extension
For new clients, I (again) recommend utilizing a virtualenv to manage the client-side packages as well:
$ mkdir cse-2.6.0
$ python3.7 -m virtualenv cse-2.6.0/
$ source cse-2.6.0/bin/activate
$ pip install container-service-extension
That wraps up my post on upgrading CSE 2.5.x deployments to CSE 2.6.0. I’ll be creating a couple of follow-up posts that walk through some of the additional workflows that have been introduced in CSE 2.6.0, including in-place upgrades of Kubernetes clusters via CSE. Stay tuned…