Building a multi-container application with vSphere Integrated Containers v1.1.1

Running demo containers on vSphere Integrated Containers (aka VIC) is pretty exciting, but eventually the time comes to get our hands dirty and build an actual application that needs to live on a virtual container host (aka VCH) in a resilient way. So in this post, I’ll try to cover the steps to build an application with two layers (an application/UI layer and a database layer) and how to work around the challenges that the VIC engine imposes. The high-level objectives are:

  • Select a proper application
  • Open the required ports on ESXi servers
  • Create and configure a VCH
  • Create docker-compose configuration file
  • Run the containers and test the application

The component versions that I use for this task are:

  • vSphere Integrated Containers: 1.1.1, build 56a309f
  • docker-compose: 1.13.0, build 1719ceb

Selection of a proper application:

The application that I’m going to deploy is an open source project on GitHub called Gogs. Gogs is described by its creators as a painless, self-hosted Git service. The goal of the project is to provide the easiest, fastest and most painless way of setting up a self-hosted Git service, written in Go. This enables an independent binary distribution across all platforms that Go supports, including Linux, Mac OS X, Windows and even ARM.

There are many ways to install Gogs, such as from source, from packages or with Vagrant, but of course we will focus on installing it as a Docker container. Normally, one container is more than enough to run Gogs, but it also supports remote databases, so we will take advantage of that to create a two-layered application.

Required ports on ESXi servers:

Before we start, I assume that the VIC appliance is deployed and configured properly. We also need a vic-machine box in order to run vic commands; in my environment this is a CentOS box. If we don’t have the vic-machine binaries yet, we can easily get them with curl from the VIC appliance (sddcvic is my appliance; if it uses self-signed certificates, append --insecure to the curl command).

curl -L -o /root/vic_1.1.1.tar.gz https://sddcvic:9443/vic_1.1.1.tar.gz --insecure
gzip -d /root/vic_1.1.1.tar.gz
tar -xvf /root/vic_1.1.1.tar -C /root

ESXi hosts communicate with the VCHs through port 2377 via Serial Over LAN. For the deployment of a VCH to succeed, port 2377 must be open for outgoing connections on all ESXi hosts before you run vic-machine-xxx create to deploy a VCH. The vic-machine utility includes an update firewall command that we can use to modify the firewall on a standalone ESXi host or on all of the ESXi hosts in a cluster. This command allows outgoing tcp/2377 connections on every ESXi server in the cluster defined with the --compute-resource option.

./vic-machine-linux update firewall \
    --target=sddcvcs.domain.sddc \
    --user=orcunuso \
    --compute-resource=SDDC.pCluster \
    --thumbprint="3F:6E:2F:16:FA:76:53:74:18:3F:26:9D:1A:58:40:AD:E5:D8:3E:52" \
    --allow

Initially, we may not know the thumbprint of the vCenter server. The trick here is to run the command without the --thumbprint option, copy the thumbprint from the resulting error message, add the option and re-run the command.

Deployment of the VCH:

Normally, a few options (such as name, target, user and TLS support mode) will suffice, but in order to deploy a more customized VCH there are many options that we can provide (here is the full list). Below is the command that I used to deploy mine.

./vic-machine-linux create \
    --name=vch02.domain.sddc \
    --target=sddcvcs.domain.sddc/SDDC.Datacenter \
    --thumbprint="3F:6E:2F:16:FA:76:53:74:18:3F:26:9D:1A:58:40:AD:E5:D8:3E:52" \
    --user=orcunuso \
    --compute-resource=SDDC.Container \
    --image-store=VMFS01/VCHPOOL/vch02 \
    --volume-store=VMFS01/VCHPOOL/vch02:default \
    --bridge-network=LSW10_Bridge \
    --public-network=LSW10_Mgmt \
    --client-network=LSW10_Mgmt \
    --management-network=LSW10_Mgmt \
    --dns-server=10.10.100.10 \
    --public-network-ip=10.10.100.22/24 \
    --public-network-gateway=10.10.100.1 \
    --registry-ca=/root/ssl/sddcvic.crt \
    --no-tls

The --no-tls option disables TLS authentication of connections between docker clients and the VCH, so the VCH uses neither client nor server certificates. In this case, docker clients connect to the VCH via port 2375 instead of port 2376.

Disabling TLS authentication is not the recommended way of deploying a VCH, because any docker client can then connect to it in an insecure manner. But in this exercise we will use docker-compose to build our application, and I’ve encountered many issues with a TLS-enabled VCH; sorting those out is on my to-do list.
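
With TLS disabled, pointing a docker client (or docker-compose) at the VCH is just a matter of setting DOCKER_HOST to the endpoint’s public IP and port 2375. A minimal sketch, assuming the VCH answers on the 10.10.100.22 address configured above with --public-network-ip:

# Point the docker client at the VCH endpoint (no certificates needed with --no-tls)
export DOCKER_HOST=tcp://10.10.100.22:2375

# Basic sanity checks against the VCH
docker info
docker network ls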

Creating docker-compose.yml configuration file:

Docker-compose is a tool for defining and running multi-container docker applications. With compose, we use a YAML file to configure our application’s services. Then, with a single command, we can create and start all the services from our configuration. First we need the docker-compose binary if it is not already in place; simply run curl to download it from GitHub.

curl -L https://github.com/docker/compose/releases/download/1.13.0/docker-compose-`uname -s`-`uname -m` > /usr/local/bin/docker-compose
chmod +x /usr/local/bin/docker-compose
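
A quick version check confirms the binary is in place and executable (the exact build string depends on the release that was downloaded):

docker-compose --version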

Now we need to create our configuration file (docker-compose.yml).

version: "3"
services:
  gogsapp:
    image: sddcvic.domain.sddc/gogs/gogs
    container_name: gogsapp
    restart: on-failure
    depends_on:
      - "gogsdb"
    volumes:
      - "gogsvol2:/data"
    ports:
      - "10080:3000"
      - "10022:22"
    networks:
      - "gogsnet"
  gogsdb:
    image: sddcvic.domain.sddc/library/postgres:9.6
    container_name: gogsdb
    environment:
      - POSTGRES_PASSWORD=gogsdbpassword
      - POSTGRES_USER=gogsuser
      - POSTGRES_DB=gogsdb
      - PGDATA=/var/lib/postgresql/data/data
    restart: on-failure
    volumes:
      - "gogsvol1:/var/lib/postgresql/data"
    networks:
      - "gogsnet"
networks:
  gogsnet:
    driver: bridge
volumes:
  gogsvol1:
    driver: vsphere
  gogsvol2:
    driver: vsphere

From an architectural point of view, we have two services that communicate via a bridge network that we call “gogsnet”. The Gogs application requires two ports to be exposed to the outside world: 22 (SSH) and 3000 (HTTP). The Postgres container accepts connections on port 5432, but that port does not have to be exposed because the communication stays internal to the bridge network.

To make this application more reliable, it is good practice to use persistent volumes, so that whenever we recreate the containers from the images that are already pushed to the private registry, the data is not lost and the application resumes as expected. For this purpose, we create two volumes, gogsvol1 and gogsvol2, and map them to the relevant directories. The default volume driver with VIC is vsphere, so VIC will create two VMDK files in the location that we provided with the --volume-store option during the VCH deployment phase and attach those VMDKs to the containers.
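
Once the stack is up, these vsphere-backed volumes can be listed and inspected with the standard docker volume commands against the VCH. A minimal sketch (depending on the compose project name, the volumes may appear with a prefix such as gogs_gogsvol1):

# List the volumes known to the VCH, then inspect one of them
# (use the name exactly as printed by "docker volume ls")
docker volume ls
docker volume inspect gogsvol1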

Normally, we are not supposed to specify an alternate location for the Postgres database files. In this case, however, VIC backs the volume with a VMDK, and as it is a freshly formatted volume it contains a lost+found folder, which causes the Postgres init scripts to quit with exit code 1, so the container exits as well. That is why we use the PGDATA environment variable to point Postgres at a subdirectory for its data.

At the end of the day, this is what it will look like:

Run and test the application:

Before running the app, let’s make sure that everything is in place. Our checklist includes:

  • A functional registry server
  • Required images (gogs and postgres) pushed to the registry server and ready to be pulled (see the sketch after this list)
  • A functional VCH with no-tls support
  • Docker-compose and yaml file
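
If the gogs and postgres images are not in the private registry yet, staging them is a matter of pulling from Docker Hub, re-tagging with the registry name and pushing. A rough sketch, assuming the gogs and library projects already exist in Harbor and that we run this from a regular docker host with push rights (not against the VCH):

# Pull the upstream images, re-tag them for the private registry and push
# (the Harbor CA certificate must be trusted by this docker host)
docker pull gogs/gogs
docker pull postgres:9.6
docker login sddcvic.domain.sddc
docker tag gogs/gogs sddcvic.domain.sddc/gogs/gogs
docker tag postgres:9.6 sddcvic.domain.sddc/library/postgres:9.6
docker push sddcvic.domain.sddc/gogs/gogs
docker push sddcvic.domain.sddc/library/postgres:9.6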

Now let’s build our application:

docker-compose -f /root/gogs/docker-compose-vic.yml up -d
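
Before opening the application, it does not hurt to verify that both services came up. A quick check, assuming the docker client still points at the VCH as shown earlier:

# Both gogsapp and gogsdb should be listed as running
docker-compose -f /root/gogs/docker-compose-vic.yml ps
docker ps --format "table {{.Names}}\t{{.Status}}\t{{.Ports}}"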

Excellent!!! Our containers are up and running. Let’s connect to our application via port tcp/10080 and walk through the initial configuration that needs to be done on the first run. Here we enter the values that we specified as environment variables when creating the Postgres container (database name, user and password).
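
If we want a quick look from the command line before opening a browser, a plain curl against the published port should already return an HTTP response (10.10.100.22 is the public IP given to the VCH in the create command; this is just a sketch, and the exact response depends on Gogs’ first-run state):

# Gogs answers on the host port mapped to container port 3000 in the compose file
curl -I http://10.10.100.22:10080/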

And voila!!! Our two-layered, containerized, self-hosted Git service is up and running on a virtual container host, backed by the vSphere Integrated Containers registry service (aka Harbor). Enjoy 🙂

Push Windows container images to Harbor registry

I’ve been pulling and pushing container images to the Harbor registry (that comes with vSphere Integrated Containers 1.1, as told here) for a while without any problem. Last week, I decided to play with native Windows containers on Windows Server 2016 but failed to push the image that I built from the microsoft/nanoserver image.

[As of today, the official version that can be downloaded from the My VMware portal is v1.1.1, and this has been tested with Harbor registry versions 1.1.0 and 1.1.1. Please note that this might not be supported by VMware.]

Here is what it looks like from the client when I tried to push the container image to the registry. All layers get pushed, but when the client pushes the manifest, it fails:

manifest blob unknown: blob unknown to registry

What happens under the hood is that whenever a container image is pushed to the registry, an image manifest is also uploaded that provides a configuration and a set of layers for the container image. The format of the manifest must be recognized by the container registry; if it is not, the registry returns a 404 error code and the push operation fails.

As can be clearly seen from the first screenshot, the initial two layers of the microsoft/nanoserver container image are defined as foreign layers that require a new image manifest schema (image manifest version 2, schema 2). There is one container running in the VIC appliance that is responsible for the registry service, vmware/registry:photon-2.6.0, and in order to remediate this issue we need to swap the registry container to version 2.6.1, which recognizes and accepts the new schema. To accomplish the task:

docker pull vmware/registry:2.6.1-photon
cd /etc/vmware/harbor
docker-compose -f docker-compose.notary.yml -f docker-compose.yml down
sed -i -- 's/photon-2.6.0/2.6.1-photon/g' /etc/vmware/harbor/docker-compose.yml
docker-compose -f docker-compose.notary.yml -f docker-compose.yml up -d

  1. First, pull the 2.6.1-photon tagged version of the vmware/registry repository from Docker Hub.
  2. Change to /etc/vmware/harbor and stop all the running containers gracefully.
  3. Modify the yml file to swap the container image that will run.
  4. Start all the containers again.

After we spin up the containers, we can verify with the docker ps command that the registry container has been swapped with the newer one. So it’s time to push our Windows-based container image to Harbor again… and voila!
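
For example, a filtered docker ps on the appliance should now report the new image tag (a sketch; the exact container name may vary between Harbor versions):

# The registry container should now be based on vmware/registry:2.6.1-photon
docker ps --filter "name=registry" --format "table {{.Names}}\t{{.Image}}\t{{.Status}}"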

P.S. Foreign layers are still skipped and not pushed to the Harbor registry, by design. This is another topic that I will touch on in the next post, where I will propose a workaround to push them to our private registry as well.

vSphere Integrated Containers 1.1

Much has changed since my last blog post about vSphere Integrated Containers, so I thought it would be a good time to make a fresh start.

In April 2017, VMware released vSphere Integrated Containers v1.1 (aka VIC). VIC consists of three components that can be installed from a unified OVA package:

  • VMware vSphere Integrated Containers Engine (aka VIC Engine), a container runtime for vSphere that allows developers who are familiar with Docker to develop in containers and deploy them alongside traditional VM-based workloads on vSphere clusters. vSphere administrators can manage these workloads by using vSphere in a way that is familiar.
  • VMware vSphere Integrated Containers Registry (aka Harbor), an enterprise-class container registry server that stores and distributes container images. vSphere Integrated Containers Registry extends the Docker Distribution open source project by adding the functionalities that an enterprise requires, such as security, identity and management.
  • VMware vSphere Integrated Containers Management Portal (aka Admiral), a container management portal that provides a UI for DevOps teams to provision and manage containers, including retrieving stats and info about container instances. Cloud administrators can manage container hosts and apply governance to their usage, including capacity quotas and approval workflows. When integrated with vRealize Automation, more advanced capabilities become available, such as deployment blueprints and enterprise-grade Containers-as-a-Service.

These are the headline components of the new version, but when you go deeper you will find many improvements under the hood, which I plan to cover in the blog posts to come. The key new features are:

  • A unified OVA installer for all three components
  • Official support for vSphere Integrated Containers Management Portal
  • A unified UI for vSphere Integrated Containers Registry and vSphere Integrated Containers Management Portal
  • A plug-in for the HTML5 vSphere Client
  • Support for Docker Client 1.13 and Docker API version 1.25
  • Support for using Notary with vSphere Integrated Containers Registry
  • Support for additional Docker commands. For the list of Docker commands that this release supports, see Supported Docker Commands in Developing Container Applications with vSphere Integrated Containers.

The installation process is as easy as deploying an OVA appliance, so I will not go deeper into it; I already touched on it briefly in my previous post and it is pretty much the same. All the components installed by the deployment run as containers, just like in the older version. The difference is that more containers are running, as more services are bundled in the appliance. To list the running containers:

docker ps --format "table {{.Names}}\t{{.Image}}\t{{.Command}}"

The applications running in the appliance are multi-container applications, so VMware leveraged docker-compose to define and run those containers. Under /etc/vmware/harbor there are two yml files:

  • docker-compose.yml: composes the Harbor components
  • docker-compose.notary.yml: composes the Docker Notary components

You can start and stop Harbor and Notary by running the docker-compose up and docker-compose down commands.

cd /etc/vmware/harbor
docker-compose -f docker-compose.notary.yml -f docker-compose.yml down
docker-compose -f docker-compose.notary.yml -f docker-compose.yml up -d

The only container that is not managed by docker-compose is vic-admiral. This is a standalone container and can be started and stopped individually with standard docker commands.
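
A minimal sketch, assuming the container shows up as vic-admiral in the docker ps output on the appliance:

# Restart the management portal container on its own
docker stop vic-admiral
docker start vic-admiral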