Building a multi-container application with vSphere Integrated Containers v1.1.1

Running demo containers on vSphere Integrated Containers (aka VIC) is pretty exciting, but eventually the time comes to get our hands dirty and build an actual application that needs to live on a virtual container host (aka VCH) in a resilient way. So in this post, I’ll cover the steps to build an application with two layers (an application/UI layer and a database layer) and how to work around the challenges that the VIC engine imposes. The high-level objectives are:

  • Select a proper application
  • Open the required ports on ESXi servers
  • Create and configure a VCH
  • Create docker-compose configuration file
  • Run the containers and test the application

The component versions that I use for this task are:

  • vSphere Integrated Containers: 1.1.1, build 56a309f
  • docker-compose: 1.13.0, build 1719ceb

Selection of a proper application:

The application that I’m going to deploy is an open source project on GitHub called Gogs. Gogs is defined by its creators as a painless, self-hosted Git service; the goal of the project is to build the easiest, fastest, and most painless way of setting up a self-hosted Git service. It is written in Go, which enables an independent binary distribution across all platforms that Go supports, including Linux, Mac OS X, Windows and even ARM.

There are many ways to install Gogs, such as installation from the source code, from packages or with Vagrant, but of course we will focus on installation as a Docker container. Normally one container is more than enough to run Gogs (see the sketch below), but it also supports remote databases, so we will take advantage of that to create a two-layered application.
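For reference, a minimal standalone deployment is a single docker run away. This is a sketch assuming the stock gogs/gogs image and the same port mappings we will use later; it is not part of the two-layered setup we are building:

# Single-container Gogs: SSH on 10022, web UI on 10080, data on a named volume
docker run -d --name=gogs \
    -p 10022:22 -p 10080:3000 \
    -v gogsvol:/data \
    gogs/gogs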

Required ports on ESXi servers:

Before we start, I assume that the VIC appliance is deployed and configured properly. We also need a machine to run the vic-machine commands, which is a CentOS box in my environment. If the vic-machine binaries are not in place yet, we can easily get them with curl from the VIC appliance (sddcvic is my appliance; if it uses self-signed certificates, append --insecure to the curl command).

curl -L -o /root/vic_1.1.1.tar.gz https://sddcvic:9443/vic_1.1.1.tar.gz --insecure
tar -xzvf /root/vic_1.1.1.tar.gz -C /root
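The archive should extract into a vic directory holding the vic-machine binaries for each platform (the exact contents may vary by release); the vic-machine commands that follow are run from that directory:

ls /root/vic    # vic-machine-linux should be among the extracted files
cd /root/vic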

ESXi hosts communicate with the VCHs through port 2377 via Serial Over LAN. For the deployment of a VCH to succeed, port 2377 must be open for outgoing connections on all ESXi hosts before you run vic-machine-xxx create to deploy a VCH. The vic-machine utility includes an update firewall command that we can use to modify the firewall on a standalone ESXi host or on all of the ESXi hosts in a cluster. This command allows outgoing connections on tcp/2377 for all ESXi servers under the cluster defined with the --compute-resource option.

./vic-machine-linux update firewall \
    --target=sddcvcs.domain.sddc \
    --user=orcunuso \
    --compute-resource=SDDC.pCluster \
    --thumbprint="3F:6E:2F:16:FA:76:53:74:18:3F:26:9D:1A:58:40:AD:E5:D8:3E:52" \
    --allow

Initially, we may not know the thumbprint of the vCenter server. The trick here is to run the command without the --thumbprint option, get the thumbprint from the error message, add the option and re-run the command.
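Alternatively, we can compute the thumbprint ourselves. A minimal sketch with openssl, assuming the vCenter server presents its certificate on port 443:

# Print the SHA-1 fingerprint of the vCenter certificate
echo | openssl s_client -connect sddcvcs.domain.sddc:443 2>/dev/null \
    | openssl x509 -noout -fingerprint -sha1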

Deployment of the VCH:

Normally, a few options (such as name, target, user and TLS support mode) will suffice, but in order to deploy a more customized VCH, there are many options that we can provide (here is the full list). Below is the command that I used to deploy mine.

./vic-machine-linux create \
    --name=vch02.domain.sddc \
    --target=sddcvcs.domain.sddc/SDDC.Datacenter \
    --thumbprint="3F:6E:2F:16:FA:76:53:74:18:3F:26:9D:1A:58:40:AD:E5:D8:3E:52" \
    --user=orcunuso \
    --compute-resource=SDDC.Container \
    --image-store=VMFS01/VCHPOOL/vch02 \
    --volume-store=VMFS01/VCHPOOL/vch02:default \
    --bridge-network=LSW10_Bridge \
    --public-network=LSW10_Mgmt \
    --client-network=LSW10_Mgmt \
    --management-network=LSW10_Mgmt \
    --dns-server=10.10.100.10 \
    --public-network-ip=10.10.100.22/24 \
    --public-network-gateway=10.10.100.1 \
    --registry-ca=/root/ssl/sddcvic.crt \
    --no-tls

The --no-tls option disables TLS authentication of connections between the docker clients and the VCH, so the VCH uses neither client nor server certificates. In this case, docker clients connect to the VCH via port 2375 instead of port 2376.
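A quick way to verify the endpoint is to point a plain docker client at the VCH. A minimal sketch, assuming the public IP we assigned above; docker-compose honors the same DOCKER_HOST variable, which is how the compose commands later in this post reach the VCH:

# Target the VCH endpoint (no TLS, port 2375)
export DOCKER_HOST=tcp://10.10.100.22:2375
docker info    # should identify the endpoint as a virtual container host
docker ps      # empty for now, but the connection should succeed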

Disabling TLS authentication is not the recommended way of deploying a VCH, because any docker client can then connect to it in an insecure manner. But in this exercise we will use docker-compose to build our application, and I’ve encountered many issues with a TLS-enabled VCH; sorting those out is on my to-do list.

Creating docker-compose.yml configuration file:

Docker-compose is a tool for defining and running multi-container docker applications. With compose, we use a YAML file to configure our application’s services; then, using a single command, we can create and start all the services from our configuration. First we need the docker-compose binary, if it is not already in place. Simply run curl to download it from GitHub.

curl -L https://github.com/docker/compose/releases/download/1.13.0/docker-compose-`uname -s`-`uname -m` > /usr/local/bin/docker-compose
chmod +x /usr/local/bin/docker-compose
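A quick sanity check confirms that the binary matches the version listed above:

docker-compose --version
# docker-compose version 1.13.0, build 1719ceb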

Now we need to create our configuration file, which I save as /root/gogs/docker-compose-vic.yml.

version: "3"
services:
  gogsapp:
    image: sddcvic.domain.sddc/gogs/gogs
    container_name: gogsapp
    restart: on-failure
    depends_on:
      - "gogsdb"
    volumes:
      - "gogsvol2:/data"
    ports:
      - "10080:3000"
      - "10022:22"
    networks:
      - "gogsnet"
  gogsdb:
    image: sddcvic.domain.sddc/library/postgres:9.6
    container_name: gogsdb
    environment:
      - POSTGRES_PASSWORD=gogsdbpassword
      - POSTGRES_USER=gogsuser
      - POSTGRES_DB=gogsdb
      - PGDATA=/var/lib/postgresql/data/data
    restart: on-failure
    volumes:
      - "gogsvol1:/var/lib/postgresql/data"
    networks:
      - "gogsnet"
networks:
  gogsnet:
    driver: bridge
volumes:
  gogsvol1:
    driver: vsphere
  gogsvol2:
    driver: vsphere

From an architectural point of view, we have two services that communicate via a bridge network that we call “gogsnet”. The Gogs application requires two ports to be exposed to the outside world, 22 and 3000. The postgres container accepts connections on port 5432, but that port does not have to be exposed because the communication stays internal to the bridge network.

To make this application more reliable, it is a good practice to use persistent volumes, so that whenever we recreate the containers from the images that are already pushed to the private registry, the data will not be lost and the application will resume as expected. For this purpose, we create two volumes, gogsvol1 and gogsvol2, and map them to the relevant directories. The default volume driver with VIC is vsphere, so VIC will create two VMDK files in the location that we provided with the --volume-store option during the VCH deployment phase and attach those VMDKs to the containers.
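The same driver can also be exercised directly from the docker client. A minimal sketch, assuming the vsphere driver’s Capacity option; note that compose prefixes volume names with the project name, so this test volume is separate from the ones compose creates:

# Create, list and remove a test volume backed by a VMDK in the volume store
docker volume create --name testvol --opt Capacity=2GB
docker volume ls
docker volume rm testvol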

Normally, we are not supposed to specify an alternate location for the postgres database files. In this case, however, VIC backs the volume with a VMDK, and as it is a new volume it contains a lost+found folder, which causes the postgres init scripts to quit with exit code 1, so the container exits as well. That is why we use the PGDATA environment variable and specify a subdirectory to contain the data.
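In other words, the layout on the fresh volume looks roughly like this (a conceptual sketch, not actual command output):

# /var/lib/postgresql/data    <- mount point of gogsvol2's sibling, gogsvol1
# ├── lost+found              <- created when the new VMDK is formatted
# └── data/                   <- PGDATA points here, so initdb sees an empty directory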

At the end of the day, this is how it looks:

Run and test the application:

Before running the app, let’s make sure that everything is in place. Our checklist includes:

  • A functional registry server
  • The required images, ready to be pulled from the registry server (gogs and postgres)
  • A functional VCH with TLS disabled
  • The docker-compose binary and the YAML file

Now let’s build our application:

docker-compose -f /root/gogs/docker-compose-vic.yml up -d
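Once compose returns, we can verify that both containers came up. A quick check, reusing the same compose file path:

docker-compose -f /root/gogs/docker-compose-vic.yml ps
docker logs gogsdb | tail -5    # postgres should report it is ready to accept connections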

Excellent! Our containers are up and running. Let’s connect to our application via port tcp/10080 and complete the initial configuration required for the first run. Here we supply the values that we specified as environment variables when creating the postgres container.
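The database settings on the Gogs install page map directly onto the environment variables from the compose file; the field names below are from the Gogs first-run wizard:

# Database Type:  PostgreSQL
# Host:           gogsdb:5432    (the service name resolves on gogsnet)
# User:           gogsuser
# Password:       gogsdbpassword
# Database Name:  gogsdb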

And voilà! Our two-layered, containerized, self-hosted Git service is up and running on a virtual container host, backed by the vSphere Integrated Containers registry service (aka Harbor). Enjoy 🙂