Container Service Extension 2.5 Installation: Part 1

With the recent release of the Container Service Extension 2.5.0, I wanted to take some time to walk through the installation and configuration of the Container Service Extension (CSE) server in conjunction with VMware vCloud Director 10.

This will be a series of 3 blog posts that cover the following topics:

  • CSE Server installation and configuration (this post)
  • Tenant enablement and Kubernetes cluster deployment via vcd-cli (Part 2)
  • CSE Enterprise and VMware Enterprise PKS cluster deployment (Part 3)

Container Service Extension Overview

Before we get started, I wanted to talk a bit about CSE and what purpose it serves in a Service Provider’s environment. The Container Service Extension is a VMware vCloud Director extension that helps tenants create, lifecycle manage, and interact with Kubernetes clusters in vCloud Director-managed environments.

There are currently two versions of CSE: Standard and Enterprise. CSE Standard brings Kubernetes-as-a-Service to vCD by creating customized vApp templates and enabling tenant/organization administrators to deploy fully functional Kubernetes clusters in self-contained vApps. CSE Standard cluster creation can be enabled on existing NSX-V backed OrgVDCs in a tenant's environment. With the release of CSE Enterprise in the CSE 2.0 release, VMware has also added the ability for tenants to provision VMware Enterprise PKS Kubernetes clusters backed by NSX-T resources in vCloud Director-managed environments. In this blog post, I am going to focus on the enablement of CSE Standard Kubernetes cluster creation in an existing vCloud Director OrgVDC.

For more information on CSE, have a look at the Kubernetes-as-a-Service in vCloud Director reference architecture (authored by yours truly 😄) as well as the CSE Installation Documentation.

Prerequisites

In order to install CSE 2.5.0, review the CSE Server Installation Prerequisites section of the CSE documentation and confirm you have fulfilled all of the vCD-specific requirements to support CSE Standard Kubernetes cluster deployment. As mentioned in that documentation, VMware recommends utilizing a user with the System Administrator role in the vCD environment for CSE server management.

Along with the prereqs mentioned in the documentation above, please ensure you have a RabbitMQ server available, as the CSE server utilizes AMQP as a messaging queue to communicate with the vCD cell.

For vCloud Director 10, you will need to deploy RabbitMQ 3.7.x (see vCloud Director Release notes for RabbitMQ compatibility information). For more information on deploying RabbitMQ, please refer to the RabbitMQ installation documentation.
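
If you need a quick way to stand up RabbitMQ in a lab, the following is a minimal sketch using Docker and rabbitmqctl; the cse-amqp user and <password> placeholder simply mirror the amqp section of the config file shown later in this post:

# docker run -d --name rabbitmq -p 5672:5672 -p 15672:15672 rabbitmq:3.7-management
# docker exec rabbitmq rabbitmqctl add_user cse-amqp <password>
# docker exec rabbitmq rabbitmqctl set_permissions -p / cse-amqp ".*" ".*" ".*"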

Finally, at the time of this writing, CSE requires Python 3.7.3 or later. In this walkthrough, I have chosen to install the CSE Server on CentOS 7.6 within a Python 3.7.3 virtual environment, but any variant of Linux that supports Python 3.7.3 installations will suffice. For more information on configuring a virtual environment to support a CSE Server installation, see my earlier blog post, which walks through the process.
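
For reference, a condensed sketch of that process on CentOS 7 looks like the following (a source build of Python 3.7.3 followed by creation of the virtual environment; see the linked post for the full walkthrough):

# yum install -y gcc openssl-devel bzip2-devel libffi-devel
# curl -O https://www.python.org/ftp/python/3.7.3/Python-3.7.3.tgz
# tar xzf Python-3.7.3.tgz && cd Python-3.7.3
# ./configure --enable-optimizations
# make altinstall
$ python3.7 -m venv ~/cse-env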

Installing CSE Server 2.5.0

Now that I’ve established the prereqs, I am ready to install the bits that will support the CSE server installation.

Note: The following commands will need to be run on the Linux server hosting the CSE server installation.

First things first, I'll create a cse user that I'll use to manage the CSE server:

# useradd cse
# passwd cse
# su - cse

Now, after creating my Python 3.7.3 virtual environment, I'll need to activate it. I created my virtual environment in the ~/cse-env directory:

$ source ~/cse-env/bin/activate

Note: After activating the virtual environment, you should see the (virtual-environment-name) prepended to your bash prompt, confirming you are operating in the virtual environment.

Now I’m ready to install the CSE server bits within the virtual environment! Utilize pip to pull down the CSE packages:

$ pip install container-service-extension
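
Note: pip will pull the latest published release by default. If you want to guarantee the 2.5.0 release specifically, the version can be pinned:

$ pip install container-service-extension==2.5.0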

Verify CSE is installed and the version is 2.5.0:

$ cse version
CSE, Container Service Extension for VMware vCloud Director, version 2.5.0

Now I’m ready to build the configuration file and deploy the CSE server!!

Container Service Extension Configuration File

The CSE server utilizes a yaml config file that contains information about the vCloud Director/vCenter infrastructure that will be supporting the Kubernetes cluster deployments. The config file also contains information regarding the RabbitMQ broker configured in the prerequisites above. This config file will be used to install and run the CSE service on the CSE server.

Before we get started, I wanted to take some time to talk about how CSE deploys Kubernetes clusters. CSE uses customized VM templates (Kubernetes templates) as building blocks for deployment of Kubernetes clusters. These templates are crucial for CSE to function properly. New in version 2.5.0, CSE utilizes “pre-configured” template definitions hosted on a remote repository.

Templates vary by guest OS (e.g. PhotonOS, Ubuntu), as well as software versions, like Kubernetes, Docker, and Weave. Each template name is uniquely constructed based on the flavor of guest OS, Kubernetes, and Weave versions. The definitions of different templates reside in an official location hosted at a remote repository URL. The CSE sample config file, out of the box, points to the official location of those template definitions. The remote repository is officially managed by maintainers of the CSE project. For more information on template management in CSE, refer to the CSE documentation.

Now that we’ve discussed some of the changes for template management in CSE 2.5.0, I’m ready to start our CSE server installation.

Since I installed the CSE bits within a Python 3.7.3 virtual environment, the first thing I'll do is activate that virtual environment and verify the CSE version:

Note: All commands below should be run from the CSE server CLI.

$ source cse-env/bin/activate


$ cse version
CSE, Container Service Extension for VMware vCloud Director, version 2.5.0

I’ll use the cse command to generate a sample file (I’m calling mine config.yaml) that I can use to build out my config file for my CSE installation:

$ cse sample -o config.yaml

Great! Now I have a skeleton configuration file to build on. Let's have a look at each section of the config file.

amqp section

The amqp section of the config file contains information about the RabbitMQ AMQP broker that the CSE server will use to communicate with the vCloud Director instance. Let’s have a look at my completed amqp section below. All of the values used below are from my lab and some will differ for your deployment:

amqp:
  exchange: cse-exchange      <--- RabbitMQ exchange name
  host: rabbitmq.vcd.zpod.io  <--- RabbitMQ hostname
  password: <password>        <--- RabbitMQ user's password
  port: 5672                  <--- RabbitMQ port (default is 5672)
  prefix: vcd                 <--- default value, can be left as is
  routing_key: cse            <--- default value, can be left as is
  ssl: false                  <--- Set to "true" if using SSL for RabbitMQ connections
  ssl_accept_all: false       <--- Set to "true" if using SSL and utilizing self-signed certs
  username: cse-amqp          <--- RabbitMQ username (with access to the vhost)
  vhost: /                    <--- RabbitMQ virtual host that contains the exchange

The exchange defined in the file above will be created by the CSE server on install (if it doesn't already exist). This exchange should NOT be the same one configured in the Extensibility section of the vCD Admin Portal. However, the Extensibility section of the vCD Admin Portal must be configured using the same virtual host (/ in my example above).

No manual config is required on the RabbitMQ server side aside from ensuring the RabbitMQ user (cse-amqp in the example above) has full access to the virtual host. See my previous post on Deploying vCloud Director for information on creating RabbitMQ users.
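
As a side note, the AMQP settings can also be reviewed from the command line with the vcd-cli tool (assuming your vcd-cli version includes the amqp command group); a quick sketch after logging in as a system administrator:

$ vcd login director.vcd.zpod.io system administrator -iw
$ vcd amqp info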

vcd section

As you might guess, this section of the config file contains information regarding the vCloud Director instance that CSE will communicate with via the API. Let's have a look at the vcd config:

vcd:
  api_version: '33.0'            <--- vCD API version
  host: director.vcd.zpod.io     <--- vCD Hostname
  log: true                      <--- Set to "true" to generate log files for CSE/vCD interactions
  password: my_secret_password   <--- vCD system admin's password
  port: 443                      <--- default value, can be left as is unless otherwise needed 
  username: administrator        <--- vCD system admin username
  verify: false                  <--- Set to "true" to verify SSL certificates

vcs section

In this section, we define the vCenter instances that are being managed by vCD. CSE needs access to the vCenter appliances in order to perform guest operation modifications, queries, and program execution. In my lab, my vCD deployment is managing 2 vCSA instances. You can add additional entries if required:

vcs:
- name: vc-pks                           <--- vCenter name as it appears in vCD
  password: <password>                   <--- administrator@vsphere.local's password
  username: administrator@vsphere.local  <--- vCenter admin's username
  verify: false                          <--- Set to "true" to verify SSL certificates
- name: vc-standard
  password: <password>
  username: administrator@vsphere.local
  verify: false

service section

The service section is small and really only has one configuration decision to make. If the enforce_authorization flag is set to false, ANY user with permissions to create vApps in any Org in the vCD environment can provision Kubernetes clusters via CSE. If set to true, you can utilize RBAC functionality to grant specific Orgs, and specific users within those Orgs, the right to create clusters. When set to true, CSE refuses any request to create a Kubernetes cluster unless the user (and their org) has the proper rights assigned to allow the operation. For more information on configuring RBAC, see my previous blog post that walks through RBAC enablement scenarios (although that post was authored using CSE 2.0, the constructs have not changed in 2.5.0).

service:
  enforce_authorization: true
  listeners: 5                  <--- number of threads CSE server can utilize
  log_wire: false               <--- if set to "true", will log all REST calls initiated by CSE to vCD
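
When enforce_authorization is set to true, the general flow (sketched below with the org from my lab and a default vCD role name; adjust for your environment) is that a system administrator first grants the CSE right to the org, and the right is then added to a role held by the tenant users:

$ vcd right add -o cse_25_test "{cse}:CSE NATIVE DEPLOY RIGHT"
$ vcd role add-right "vApp Author" "{cse}:CSE NATIVE DEPLOY RIGHT"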

broker section

Here's where all the magic happens!! The broker section is where we define where and how the CSE server will deploy the initial Kubernetes cluster that serves as the basis for the vApp template used for tenants' Kubernetes cluster deployments.

  • The catalog value is the name CSE will use when creating a publicly shared catalog within my org for storing the vApp template(s). The CSE server will create this catalog in vCD when I install the CSE server.

  • The default_template_name value is the template name that CSE will use by default when users deploy Kubernetes clusters via CSE without defining a specific template. Refer to the Template Announcements section of the CSE documentation for available template names and revision numbers.

  • The default_template_revision value is a numerical value associated with the version of the template released by VMware. At the time of writing, all available templates are at revision 1.

  • The ip_allocation_mode value is the mode to be used during the install process to build the template. Possible values are dhcp or pool. During creation of clusters for tenants, pool IP allocation mode is always used.

  • The network value is an OrgVDC Network within the OrgVDC that will be used during the install process to build the template. It should have outbound access to the public internet in order to reach the template repository. The CSE server does not need to be connected to this network.

  • The org value is the organization that contains the shared catalog where the Kubernetes vApp templates will be stored.

  • The remote_template_cookbook_url value is the URL of the template repository where all template definitions and associated script files are hosted. This is new in CSE 2.5.0.

  • The storage_profile is the name of the storage profile to use when creating the temporary vApp used to build the Kubernetes cluster vApp template.

  • The vdc value is the virtual datacenter within the org (defined above) that will be used during the install process to build the vApp template.

Here is an example of the completed broker section:

broker:
  catalog: cse-25
  default_template_name: ubuntu-16.04_k8-1.15_weave-2.5.2
  default_template_revision: 1
  ip_allocation_mode: pool
  network: outside
  org: cse_25_test
  remote_template_cookbook_url: https://raw.githubusercontent.com/vmware/container-service-extension-templates/master/template.yaml
  storage_profile: '*'
  vdc: cse_vdc_1

template_rules section

This section is new in CSE 2.5.0 and is entirely optional. The template_rules section allows system admins to utilize vCD compute policies to limit which users have access to which Kubernetes templates; by default, any user that can create Kubernetes clusters via CSE has access to all available templates.
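
As a rough illustration based on the sample config file's schema (the rule and compute policy names below are hypothetical), a rule that ties a template to a compute policy looks like this:

template_rules:
- name: Rule1
  target:
    name: ubuntu-16.04_k8-1.15_weave-2.5.2
    revision: 1
  action:
    compute_policy: "sample policy name"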

pks_config section

This section points to a separate .yaml config file that contains information about a VMware Enterprise PKS deployment, if you intend to utilize CSE Enterprise as well. Refer to Part 3 of my series (https://mannimal.blog/2019/11/22/container-service-extension-2-5-installation-part-3/) for information on building the pks_config.yaml file.

Note: System admins can add CSE Enterprise capabilities via the pks_config flag at any point after CSE server installation; it does not have to be set on initial install.

pks_config: null  <--- Set to name of .yaml config file for CSE Enterprise cluster deployment

Now that I’ve gone over the config file, I am ready to proceed with my installation of the CSE server!!

CSE Server Installation and Validation

Before starting the install, we need to set the correct permissions on the config file:

chmod 600 config.yaml

After building out the config file, I'll simply need to run the following command to install CSE in the environment. I'll use the --skip-template-creation flag to ensure the configuration is sound and install the desired template in a subsequent command:

cse install -c config.yaml --skip-template-creation

Required Python version: >= 3.7.3
Installed Python version: 3.7.3 (default, Sep 16 2019, 12:54:43) 
[GCC 4.8.5 20150623 (Red Hat 4.8.5-36)]
Validating config file 'config.yaml'
Connected to AMQP server (rabbitmq.vcd.zpod.io:5672)
InsecureRequestWarning: Unverified HTTPS request is being made. Adding certificate verification is strongly advised.
Connected to vCloud Director (director.vcd.zpod.io:443)
Connected to vCenter Server 'vc-standard' as 'administrator@vcd.zpod.io' (vcsa.vcd.zpod.io:443)
Connected to vCenter Server 'vc-pks' as 'administrator@pks.zpod.io' (vcsa.pks.zpod.io:443)
Config file 'config.yaml' is valid
Installing CSE on vCloud Director using config file 'config.yaml'
Connected to vCD as system administrator: director.vcd.zpod.io:443
Checking for AMQP exchange 'cse-exchange'
AMQP exchange 'cse-exchange' is ready
Updated cse API Extension in vCD
Right: CSE NATIVE DEPLOY RIGHT added to vCD
Right: CSE NATIVE DEPLOY RIGHT assigned to System organization.
Right: PKS DEPLOY RIGHT added to vCD
Right: PKS DEPLOY RIGHT assigned to System organization.
Created catalog 'cse-25'
Skipping creation of templates

Great!! I’ve installed the CSE Server. Now I’m ready to deploy a Kubernetes cluster vApp template into my cse-25 catalog. I can obtain a template name from the Template Announcement section of the CSE documentation. I can also use the following cse command from the CLI of the CSE server to query available templates:

$ cse template list -d remote

I can also define an ssh key that will be injected into the VMs provisioned to act as the Kubernetes nodes with the --ssh-key flag. The system admin can then use the corresponding private key to access the Kubernetes nodes' operating system via SSH. I'll use the following cse command to install the Ubuntu Kubernetes template:

$ cse template install ubuntu-16.04_k8-1.15_weave-2.5.2 --ssh-key id_rsa.pub

This command pulls down an Ubuntu OVA to the CSE server and then pushes it to the vCD environment, creates a set of VMs, and performs all required post-provisioning customization to create a functioning Kubernetes cluster.

After the Kubernetes cluster is created, CSE creates a vApp template based on the cluster and then deletes the running cluster from the environment. This vApp template will then be used by CSE to create Kubernetes clusters when tenants use the vcd-cli to create clusters.
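
Before running the service, I can also sanity-check the completed installation with the cse check command; to my knowledge, the --check-install flag validates that the extension is properly registered with vCD in addition to validating the config file itself:

$ cse check config.yaml --check-install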

Now I'm finally ready to test the install with the cse run command, which will run the CSE service in the current bash shell:

$ cse run

---output omitted---

AMQP exchange 'vcd' exists
CSE on vCD is currently enabled
Found catalog 'cse-25'
CSE installation is valid
Started thread 'MessageConsumer-0 (140180650903296)'
Started thread 'MessageConsumer-1 (140180417672960)'
Started thread 'MessageConsumer-2 (140180634117888)'
Started thread 'MessageConsumer-3 (140180642510592)'
Started thread 'MessageConsumer-4 (140180409280256)'
Container Service Extension for vCloud Director
Server running using config file: config.yaml
Log files: cse-logs/cse-server-info.log, cse-logs/cse-server-debug.log
waiting for requests (ctrl+c to close)

Awesome!! We can see the AMQP threads are created in the output and the server is running using my config file. Use ctrl+c to stop the service and return to the command prompt.

Controlling the CSE Service with systemctl

As you can see above, I can manually run the CSE Server with the cse run command, but it makes more sense to be able to automate the starting and stopping of the CSE service. To do that, I’ll create a systemd unit file and manage the CSE service via systemctl.

First, I’ll need to create a script that the systemd unit file will refer to in order to start the service. My virtual environment is located at /home/cse/cse-env and my CSE config file is located at /home/cse/config.yaml.

I’ll use vi to create the cse.sh file:

$ vi ~/cse.sh

And add the following text to the new file and save:

#!/usr/bin/env bash

source /home/cse/cse-env/bin/activate
cse run -c /home/cse/config.yaml

Now that I’ve created the start script, I need to create a unit file for systemd. I’ll access the root user on the CSE server:

$ su -

Now I’m ready to create the unit file. I’ll use vi to create the /etc/systemd/system/cse.service file:

# vi /etc/systemd/system/cse.service

And add the following text to the file:

[Service]
ExecStart=/bin/sh /home/cse/cse.sh
Type=simple
User=cse
WorkingDirectory=/home/cse
Restart=always
[Install]
WantedBy=multi-user.target
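
Note: Optionally, a [Unit] section can be added at the top of the unit file so systemd waits for networking before starting CSE; a minimal example:

[Unit]
Description=Container Service Extension for vCloud Director
After=network.target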

After adding the unit file, I’ll need to reload the systemctl daemon:

# systemctl daemon-reload

Now I’ll start the CSE service and enable it to ensure it starts automatically on boot:

# systemctl start cse
# systemctl enable cse

Finally, I’ll check the status of the service to ensure it is active and verify we see the messaging threads:

# service cse status
Redirecting to /bin/systemctl status cse.service
● cse.service
   Loaded: loaded (/etc/systemd/system/cse.service; disabled; vendor preset: disabled)
   Active: active (running) since Thu 2019-10-10 17:00:50 EDT; 13s ago
 Main PID: 9621 (sh)
   CGroup: /system.slice/cse.service
           ├─9621 /bin/sh /home/cse/cse.sh
           └─9624 /home/cse/cse-ga/bin/python3.7 /home/cse/cse-ga/bin/cse run -c /home/cse/config.yaml

Oct 10 17:00:59 cse-25.vcd.zpod.io sh[9621]: CSE installation is valid
Oct 10 17:01:00 cse-25.vcd.zpod.io sh[9621]: Started thread 'MessageConsumer-0 (139712918025984)'
Oct 10 17:01:00 cse-25.vcd.zpod.io sh[9621]: Started thread 'MessageConsumer-1 (139712892847872)'
Oct 10 17:01:00 cse-25.vcd.zpod.io sh[9621]: Started thread 'MessageConsumer-2 (139712901240576)'
Oct 10 17:01:01 cse-25.vcd.zpod.io sh[9621]: Started thread 'MessageConsumer-3 (139712909633280)'
Oct 10 17:01:01 cse-25.vcd.zpod.io sh[9621]: Started thread 'MessageConsumer-4 (139712882005760)'
Oct 10 17:01:01 cse-25.vcd.zpod.io sh[9621]: Container Service Extension for vCloud Director
Oct 10 17:01:01 cse-25.vcd.zpod.io sh[9621]: Server running using config file: /home/cse/config.yaml
Oct 10 17:01:01 cse-25.vcd.zpod.io sh[9621]: Log files: cse-logs/cse-server-info.log, cse-logs/cse-server-debug.log
Oct 10 17:01:01 cse-25.vcd.zpod.io sh[9621]: waiting for requests (ctrl+c to close)

Success!! Now I’m ready to start interacting with the CSE server with the CSE client via the vcd-cli tool.

Conclusion

In Part 1 of my series on CSE Installation, I detailed the steps required to install the CSE 2.5.0 bits within a Python 3.7.3 virtual environment. I also took a detailed look at the configuration file used to power the CSE Server before installing and running the server itself.

Join me in Part 2 of this series on the Container Service Extension, where I'll walk through configuring a tenant to allow provisioning of Kubernetes clusters via the CSE extension in vcd-cli!!

20 Replies to “Container Service Extension 2.5 Installation: Part 1”

  1. Hi Joe

    Thanks for this post.

    I have one question. When vCD tenant deploys a K8s cluster, how can he/she connect to the master or worker nodes as root login is disabled over ssh and his/her ssh key is not part of the templates?

    thanks

  2. Hi @Stephane! Thanks for the question!

    The desired workflow would be for the Org admin to enable specific tenants in their org with the rights required to provision clusters. Then, once those tenants have the required rights, they issue the "vcd cse cluster create" command with their own ssh key defined to allow them to ssh to the nodes.

    This workflow is illustrated in Part 2 of the series (https://mannimal.blog/2019/10/10/container-service-extension-2-5-0-installation-part-2/). The --ssh-key flag is used during the template install in case the org admin needs to troubleshoot the template as it's being deployed.

    Does that help answer your questions? Feel free to reply with any additional questions or comments, thank you!

    Joe

  3. Hi Mannimal,

    I got a bit confused with the step: cse template install ubuntu-16.04_k8-1.15_weave-2.5.2 --ssh-key id_rsa.pub. Can you explain or give a sample of how to control the ssh-key? Is the id_rsa.pub generated by me (the admin of the cloud)?

    1. Hi @Phuoc Tran, that is correct. For the template creation, you can specify an ssh key to use in case you need to ssh to the created virtual machines for troubleshooting.

  4. I tried to create my ssh-key and put them to template –> this is my understanding

    ssh-keygen
    cse template install ubuntu-16.04_k8-1.15_weave-2.5.2 --ssh-key ~/.ssh/id_rsa.pub

    1. @Phuoc Tran Yep, that is correct!

      Now, if you'd like your tenants to be able to access the OS of their clusters, they can define their own public ssh key when they create their clusters, using the "--ssh-key" flag.

      Otherwise, all created clusters will have the “default” ssh public key (defined when you created the template) so you would need to assist in granting OS level access to the Kubernetes nodes if your tenants desire.

  5. Hi admin,

    I also encounter this problem while creating templates

    /home/cse/cse-env/lib/python3.7/site-packages/urllib3/connectionpool.py:1004: InsecureRequestWarning: Unverified HTTPS request is being made to host ‘172.26.29.102’. Adding certificate verification is strongly advised. See: https://urllib3.readthedocs.io/en/latest/advanced-usage.html#ssl-warnings
    InsecureRequestWarning,
    /home/cse/cse-env/lib/python3.7/site-packages/urllib3/connectionpool.py:1004: InsecureRequestWarning: Unverified HTTPS request is being made to host ‘172.26.29.102’. Adding certificate verification is strongly advised. See: https://urllib3.readthedocs.io/en/latest/advanced-usage.html#ssl-warnings
    InsecureRequestWarning,
    /home/cse/cse-env/lib/python3.7/site-packages/urllib3/connectionpool.py:1004: InsecureRequestWarning: Unverified HTTPS request is being made to host ‘172.26.29.102’. Adding certificate verification is strongly advised. See: https://urllib3.readthedocs.io/en/latest/advanced-usage.html#ssl-warnings
    InsecureRequestWarning,
    waiting for process 968 on vm ‘vim.VirtualMachine:vm-3724’ to finish (1)
    /home/cse/cse-env/lib/python3.7/site-packages/urllib3/connectionpool.py:1004: InsecureRequestWarning: Unverified HTTPS request is being made to host ‘172.26.29.102’. Adding certificate verification is strongly advised. See: https://urllib3.readthedocs.io/en/latest/advanced-usage.html#ssl-warnings
    InsecureRequestWarning,
    /home/cse/cse-env/lib/python3.7/site-packages/urllib3/connectionpool.py:1004: InsecureRequestWarning: Unverified HTTPS request is being made to host ‘172.26.29.102’. Adding certificate verification is strongly advised. See: https://urllib3.readthedocs.io/en/latest/advanced-usage.html#ssl-warnings
    InsecureRequestWarning,
    process [140, , ] on vm ‘vim.VirtualMachine:vm-3724’ finished, exit code: 140
    Result: [140, , ]
    stderr:
    Created symlink /etc/systemd/system/iptables.service.wants/iptables-ports.service β†’ /etc/systemd/system/iptables-ports.service.
    curl#6: Couldn’t resolve host name
    Error: Failed to synchronize cache for repo ‘VMware Photon Linux 2.0(x86_64) Updates’ from ‘https://dl.bintray.com/vmware/photon_updates_2.0_x86_64’
    curl#6: Couldn’t resolve host name
    Error: Failed to synchronize cache for repo ‘VMware Photon Linux 2.0(x86_64)’ from ‘https://dl.bintray.com/vmware/photon_release_2.0_x86_64’
    curl#6: Couldn’t resolve host name
    Error: Failed to synchronize cache for repo ‘VMware Photon Extras 2.0(x86_64)’ from ‘https://dl.bintray.com/vmware/photon_extras_2.0_x86_64’
    curl#6: Couldn’t resolve host name
    Error: Failed to synchronize cache for repo ‘VMware Photon Linux 2.0(x86_64) Updates’ from ‘https://dl.bintray.com/vmware/photon_updates_2.0_x86_64’
    curl#6: Couldn’t resolve host name
    Error: Failed to synchronize cache for repo ‘VMware Photon Linux 2.0(x86_64)’ from ‘https://dl.bintray.com/vmware/photon_release_2.0_x86_64’
    curl#6: Couldn’t resolve host name
    Error: Failed to synchronize cache for repo ‘VMware Photon Extras 2.0(x86_64)’ from ‘https://dl.bintray.com/vmware/photon_extras_2.0_x86_64’
    Nothing to do.
    Error(908) : Command line error: option is invalid.

    stdout:
    Disabling Repo: ‘VMware Photon Linux 2.0(x86_64) Updates’
    Disabling Repo: ‘VMware Photon Linux 2.0(x86_64)’
    Disabling Repo: ‘VMware Photon Extras 2.0(x86_64)’
    Metadata cache created.
    upgrading the system
    Refreshing metadata for: ‘VMware Photon Linux 2.0(x86_64) Updates’
    Disabling Repo: ‘VMware Photon Linux 2.0(x86_64) Updates’
    Refreshing metadata for: ‘VMware Photon Linux 2.0(x86_64)’
    Disabling Repo: ‘VMware Photon Linux 2.0(x86_64)’
    Refreshing metadata for: ‘VMware Photon Extras 2.0(x86_64)’
    Disabling Repo: ‘VMware Photon Extras 2.0(x86_64)’
    No such option: --security. Please use /usr/bin/tdnf --help

    Failed VM customization. Please check logs.
    CSE Installation Error. Check CSE install logs
    Traceback (most recent call last):
    File “/home/cse/cse-env/bin/cse”, line 8, in
    sys.exit(cli())
    File “/home/cse/cse-env/lib/python3.7/site-packages/click/core.py”, line 764, in __call__
    return self.main(*args, **kwargs)
    File “/home/cse/cse-env/lib/python3.7/site-packages/click/core.py”, line 717, in main
    rv = self.invoke(ctx)
    File “/home/cse/cse-env/lib/python3.7/site-packages/click/core.py”, line 1137, in invoke
    return _process_result(sub_ctx.command.invoke(sub_ctx))
    File “/home/cse/cse-env/lib/python3.7/site-packages/click/core.py”, line 956, in invoke
    return ctx.invoke(self.callback, **ctx.params)
    File “/home/cse/cse-env/lib/python3.7/site-packages/click/core.py”, line 555, in invoke
    return callback(*args, **kwargs)
    File “/home/cse/cse-env/lib/python3.7/site-packages/click/decorators.py”, line 17, in new_func
    return f(get_current_context(), *args, **kwargs)
    File “/home/cse/cse-env/lib/python3.7/site-packages/container_service_extension/server_cli.py”, line 319, in install
    msg_update_callback=ConsoleMessagePrinter())
    File “/home/cse/cse-env/lib/python3.7/site-packages/container_service_extension/configure_cse.py”, line 284, in install_cse
    msg_update_callback=msg_update_callback)
    File “/home/cse/cse-env/lib/python3.7/site-packages/container_service_extension/configure_cse.py”, line 602, in _install_template
    retain_temp_vapp=retain_temp_vapp)
    File “/home/cse/cse-env/lib/python3.7/site-packages/container_service_extension/template_builder.py”, line 449, in build
    self._customize_vm(vapp, TEMP_VAPP_VM_NAME)
    File “/home/cse/cse-env/lib/python3.7/site-packages/container_service_extension/template_builder.py”, line 347, in _customize_vm
    raise Exception(f”{msg}; Result: {result}”)
    Exception: Failed VM customization; Result: [140, , ]

    1. Glad to hear it was fixed! As you mentioned, the VMs created during the template creation will need to be able to reach out to the internet to install packages required to support the Kubernetes cluster.

  6. Hi Mannimal,

    Thank you so much for your support. Right now, I'm facing another problem which may be related to the Python installation. The command vcd cse cluster list fails and I don't know how to fix it. Although I tried following the instructions (pip3 uninstall pyvcloud vcd-cli -y, then pip3 install pyvcloud==21.0.0 vcd-cli==22.0.0 --upgrade), the problem still exists. Hope I can get your help again.

    Note: I installed the python according to your guide: https://mannimal.blog/2019/07/04/creating-a-virtualenv-with-python-3-7-3/

    (cse-env) [cse@console01 ~]$ vcd cse system info
    property value
    description Container Service Extension for VMware vCloud Director
    product CSE
    version 2.5.1
    (cse-env) [cse@console01 ~]$
    (cse-env) [cse@console01 ~]$ vcd cse cluster list
    Usage: vcd cse cluster list [OPTIONS]
    Try “vcd cse cluster list -h” for help.

    Error: ‘Client’ object has no attribute ‘_uri’
    (cse-env) [cse@console01 ~]$
    (cse-env) [cse@console01 ~]$ python -V
    Python 3.7.3
    (cse-env) [cse@console01 ~]$ pip -V
    pip 20.0.2 from /home/cse/cse-env/lib/python3.7/site-packages/pip (python 3.7)
    (cse-env) [cse@console01 ~]$

  7. I tried installing CSE 2.6 in my lab, and the cse install failed at the first step. I created the config.yaml file as per the specification provided in your blog.

    Ran below command:

    [root@cse-server ~]# cse install -c encrypted-config.yaml --skip-template-creation
    Required Python version: >= 3.7.3
    Installed Python version: 3.7.4 (default, May 2 2020, 09:44:00)
    [GCC 4.8.5 20150623 (Red Hat 4.8.5-39)]
    Password for config file decryption:
    Decrypting ‘encrypted-config.yaml’
    Validating config file ‘encrypted-config.yaml’
    Connected to AMQP server (rmq01.mgmt.vmw:5672)
    Status code: 404/None, None (request id: None)

    Here is how my amqp section looks like:

    amqp:
      exchange: cse-exchange
      host: rmq01.mgmt.vmw
      password: VMware1!
      port: 5672
      prefix: vcd
      routing_key: cse
      ssl: false
      ssl_accept_all: false
      username: cse-amqp
      vhost: /

    Note: I have created the cse-amqp user in RMQ but did not create any exchange. The user has full permission on vhost /.

    Log file is blank:

    [root@cse-server cse-logs]# cat cse-install_2020-05-02_22-42-10.log

    Can you help me in diagnosing what exactly is the issue

    1. Hi Manish,

      It looks like your RabbitMQ section is configured correctly (as evidenced by the Connected to AMQP server (rmq01.mgmt.vmw:5672) message in the command output), so it could be failing and returning a 404 when trying to connect to VCD.

      What version of VCD are you using? Can you ping VCD by hostname from the CSE server?

  8. HI Mannimal,

    In "Controlling the CSE Service with systemctl", if I have a password for the encrypted-config.yaml file, where should I put my password, or is it unnecessary?
    P.S. Sorry for my English.^^

  9. After I run this command:
    cse install -c encrypted-config.yaml --ssh-key /root/.ssh/id_rsa.pub

    I get this error below:
    Creating vApp ‘ubuntu-16.04_k8-1.17_weave-2.6.0_temp’
    CSE Installation Error. Check CSE install logs

    Any suggestions on how to resolve this?
