Harbor, an enterprise-class registry server

[Image: newharbor0]

With the GA release of vSphere Integrated Containers (aka VIC) in December, VMware also announced an enterprise-class registry server based on the open source Docker Distribution that allows us to store and distribute container images in our own datacenter. While VIC Engine (the core component of VIC) is great for providing a container runtime for vSphere, enterprise customers governed by strict regulations require a private solution for storing and managing container images rather than relying on the default cloud-based registry service from Docker. Docker already has a private offering, called Docker Trusted Registry, but the lack of enterprise-grade functionality such as increased control and security and identity management triggered this open source project, Project Harbor.

Project Harbor extends the open source Docker Distribution and adds the following features:

  • Role-based access control: Users and repositories are organized via projects, and a user can have different permissions for images under a project.
  • Policy-based image replication: Images can be replicated (synchronized) between multiple registry instances.
  • LDAP/AD support: Harbor integrates with existing enterprise LDAP/AD for user authentication and management.
  • Image deletion & garbage collection: Images can be deleted and their space can be recycled.
  • Graphical user portal: Users can easily browse and search repositories, and manage projects.
  • Auditing: All the operations to the repositories are tracked.
  • RESTful API: RESTful APIs for most administrative operations, making it easy to integrate with external systems.
  • Easy deployment: Harbor provides both online and offline installers. In addition, a virtual appliance (OVA) for the vSphere platform is available.
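As a small taste of that API, the sketch below lists projects with curl. The hostname, the admin credentials and even the /api/projects path are placeholders here and may differ between Harbor versions, so treat it as an illustration rather than a reference:

# List the projects visible to the admin user (hostname and password are placeholders)
curl -k -u admin:Harbor12345 https://harbor.demo.local/api/projects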

The OVA that is required to install Harbor as a virtual appliance can be downloaded from the My VMware portal. If we are not entitled to download it through the portal, we can always visit the GitHub page of the project for the manual installation instructions and the OVA file.

Let’s assume that we have decided to use the OVA format for ease of deployment. There are many options we need to provide, such as the regular virtual appliance settings (hostname, datastore, IP configuration) and application-specific configurations (passwords, SMTP settings and a few Harbor-specific options), but the most important one is the authentication method that Harbor will be configured with. It can be either LDAP authentication or database authentication, and it cannot be modified after the installation; if a change is required, we have to install a fresh instance. LDAP authentication is a good practice because it saves us from managing custom users within the database and is more secure. Below are the options that we need to provide (please modify them according to your domain):

  • Authentication mode: ldap_auth
  • LDAP URL: ldap://DomainController.demo.local
  • LDAP Search DN: CN=User2MakeQueries,OU=Users,DC=demo,DC=local
  • LDAP Search Password: Password of the above user
  • LDAP Base DN: OU=OrganizationUnitInWhichUsersWillBeQueried,DC=demo,DC=local
  • LDAP UID: The attribute used to match a user during login, such as uid, cn, email, sAMAccountName or any other attribute.
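Before deploying the appliance, it can be useful to validate these values from any Linux box with ldapsearch; the sketch below simply reuses the example values above and looks up a hypothetical user jdoe:

# Bind with the search account and query a sample user under the Base DN
ldapsearch -x -H ldap://DomainController.demo.local \
  -D "CN=User2MakeQueries,OU=Users,DC=demo,DC=local" -W \
  -b "OU=OrganizationUnitInWhichUsersWillBeQueried,DC=demo,DC=local" \
  "(sAMAccountName=jdoe)"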

After the installation, it’s highly recommended to change the Harbor admin password that we provided during the deployment phase, because it persists in plain text in the configuration file (/harbor/harbor/harbor.cfg).
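We can see this for ourselves once we are logged in to the appliance (see below); assuming the parameter is still named harbor_admin_password, as it was in the Harbor releases I have worked with, something like the following shows it sitting there in clear text:

# The initial admin password is stored in clear text in harbor.cfg
grep -i harbor_admin_password /harbor/harbor/harbor.cfg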

It will take a few minutes to complete the installation. If we set the “Permit Root Login” option to ‘true’ during the deployment phase, we can connect to the server via SSH with root credentials and begin to play around. The deployed operating system is Photon OS, and the sub-components of Harbor actually run as containers. When we run docker ps -a, all those running containers come to light. Harbor consists of six containers brought up together by docker-compose.
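For example, a trimmed docker ps output shows just the container names and their status (the exact names and image versions depend on the Harbor release deployed):

# Show only the names and status of the Harbor containers
docker ps --format "table {{.Names}}\t{{.Status}}"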

  • Proxy: This is an NGINX reverse proxy. The proxy forwards requests from browsers and Docker clients to backend services such as the registry service and core services.
  • Registry: This registry service is based on Docker Registry 2.5 and is responsible for storing Docker images and processing Docker push/pull commands.
  • Core Services: Harbor’s core functions, which mainly provide the following services:
    • UI: A graphical user interface to help users manage images on the Registry
    • Webhook: Webhook is a mechanism configured in the Registry so that image status changes in the Registry can be propagated to the webhook endpoint of Harbor. Harbor uses the webhook to update logs, initiate replications, and perform some other functions.
    • Token service: Responsible for issuing a token for every docker push/pull command according to a user’s role in a project. If there is no token in a request sent from a Docker client, the Registry redirects the request to the token service.
  • Database: Derived from the official MySQL image, it is responsible for storing the metadata of projects, users, roles, replication policies and images.
  • Job Services: This service is responsible for image replication to other Harbor instances (if there are any).
  • Log Collector: This service is responsible for collecting logs of other modules in a single place.

And this is what it looks like from an architectural point of view:

All the blue boxes shown in the diagram are running as containers. If we would like to know more about those containers and how they are configured, we can always run docker inspect commands on them.


docker inspect nginx
docker inspect harbor-jobservice
docker inspect harbor-db
docker inspect registry
docker inspect harbor-ui
docker inspect harbor-log

As a result of the inspect commands, the thing that attracts my attention is the persistent volume configuration. By their nature, containers are immutable and disposable, so in order to keep the data and the service configurations persistent, Harbor takes advantage of volume mounts between the Docker host and the containers. These mounts are also useful for modifying the configuration of the services and for replacing the UI certificate.

This is the list of the volume mounts (sources and destinations) used in all containers.
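The same information can be extracted per container with a Go template instead of wading through the full inspect output; the sketch below uses the registry container as an example and the same command works for the others:

# Print source -> destination for every volume mount of the registry container
docker inspect --format '{{range .Mounts}}{{.Source}} -> {{.Destination}}{{println}}{{end}}' registry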

We can now enjoy our brand new on-premises registry server and start pushing images to it.
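As a quick end-to-end test, we can log in, tag and push an image; the registry hostname harbor.demo.local and the project name myproject are placeholders for whatever we configured, and the project must already exist in Harbor:

# Log in to Harbor, then tag and push a local image into an existing project
docker login harbor.demo.local
docker pull nginx:latest
docker tag nginx:latest harbor.demo.local/myproject/nginx:latest
docker push harbor.demo.local/myproject/nginx:latest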

Introduction to Photon OS

[Image: photonos-1]

Since the cloud native landscape has been greatly embraced by developers and the open source community, we are witnessing increasing momentum around container runtimes. The focus is shifting from virtual machines to containers, and running microservices on minimalistic operating systems is becoming mainstream (ok, not that fast, but we will definitely get there). So the increasing popularity of minimal operating systems such as CoreOS, RancherOS, Red Hat Atomic or even Windows Nano Server has pushed VMware to build a lightweight operating system that is optimized to run containers, in order to support its cloud native strategy.

VMware introduced Photon OS as an open source project in April 2015 and released the first generally available version (v1.0) in June 2016. Briefly, Photon OS is a minimal Linux container host designed to have a small footprint and optimized for VMware platforms. With the 1.0 release, the library of packages within Photon OS has been greatly expanded, making Photon OS more broadly applicable to a range of use cases while keeping both the disk and memory footprints extremely small (kernel boot times are around 200 ms, with a 384 MB memory footprint and 396 MB on disk for the minimal installation).

Photon OS is compatible with container runtimes such as Docker, rkt and Garden (Pivotal), and with container scheduling frameworks like Kubernetes. It contains a new, open source, yum-compatible package manager (TDNF, Tiny Dandified Yum) that keeps the system as small as possible while preserving robust yum package management capabilities.
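Because TDNF keeps the familiar yum verbs, the usual package management workflow translates directly; a couple of illustrative commands (the package name is just an example):

# Refresh the repository metadata and search for a package
tdnf makecache
tdnf search git
# Install a package and update the system, just like with yum
tdnf install -y git
tdnf update -y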

P.S. Thanks to the lightweight nature of Photon OS, we are already starting to see other use cases beyond just running containers, such as VMware’s own virtual appliances. vCenter Server Appliance 6.5 runs on Photon OS, and I expect more virtual appliances to follow vCSA in the near future.

Now, let’s see how we can spin up a Photon OS based virtual machine. The first thing we need to do is download the binaries from GitHub. Photon OS is available in a few different pre-packaged binary formats, but two of them make more sense for us:

  • ISO Image: The full ISO contains everything we need to install either the minimal or full installation of Photon OS. The bootable ISO has a manual installer or can be used with PXE/kickstart environments for automated installations.
  • OVA format: The OVA is a pre-installed minimal environment. As the OVA is a complete virtual machine definition, it exists in two different versions, hardware version 10 and hardware version 11, which allows compatibility with several versions of VMware platforms.

Although deploying Photon OS via the OVA format is a valid and easy option, I will go with the ISO method so we can walk through the (admittedly few) installation steps. Now, as step 2 (step 1 was to get the binaries), we need to create a fresh virtual machine. These are the values that I used:

  • Disk space: 8 GB
  • Compatibility: Hardware Level 11 (ESXi 6.0)
  • Guest OS version: Other 3.x or later Linux (64-bit)
  • vCPU: 1
  • vRAM: 512 MB

Now, mount the ISO to the CD-ROM drive, make sure that the “Connect at power on” checkbox is selected, and power the VM on. The installation is very straightforward; we pass through the welcome page, license agreement and disk formatting sections. Probably the most important selection is the Photon OS type that we want to install.

[Image: photonos-2]

Each install option provides a different runtime environment:

  • Photon Minimal: It is a very lightweight version of the container host runtime that is best suited for container management and hosting. There is sufficient packaging and functionality to allow most common operations around modifying existing containers, as well as being a highly performant and full-featured runtime.
  • Photon Full: Photon Full includes several additional packages to enhance the authoring and packaging of containerized applications and system customization. It’s better to use Photon Full for developing and packaging the application that will be run as a container, as well as authoring the container itself.
  • Photon OSTree Host: This installation profile creates a Photon OS instance that will source its packages from a central rpm-ostree server and continue to have the library and state of packages managed by the definition that is maintained on the central rpm-ostree server.
  • Photon OSTree Server: This installation profile will create the server instance that will host the filesystem tree and managed definitions for rpm-ostree managed hosts created with the Photon OSTree Host installation profile.

After selecting the installation type, we are prompted for a hostname. The installer comes up with a randomly generated hostname; we can enter our own hostname right now or modify it after the installation with the hostnamectl command. Lastly, we enter the root password and the installation starts. In my lab environment, it took 157 seconds to complete the full installation.
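For reference, changing the hostname later is a one-liner (the name below is just an example):

# Set a new static hostname and verify it
hostnamectl set-hostname photon-01
hostnamectl status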

[Image: photonos-3]

Voila, we have an up and running Photon OS server. Photon OS is pre-configured to obtain its IP address dynamically, but if we don’t have a DHCP server in our environment, we have to configure it manually.

Network Configuration:

The network service, which is enabled by default, starts when the system boots. We manage it with systemd components such as systemd-networkd, systemd-resolved, and the networkctl command. The network configuration is based on .network files that reside in the /etc/systemd/network/ and /usr/lib/systemd/network folders. By default, when Photon OS starts, it creates a DHCP network configuration file, but we are free to add our own configuration files. Photon OS applies the configuration files in the alphabetical order of the file names; once an interface is matched in one file, it is ignored in files processed later in that order. So, to set a static IP address, we:

  • create a configuration file with .network extension
  • place it in the /etc/systemd/network directory
  • set the file’s mode bits to 644
  • restart the systemd-networkd service
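A minimal sketch of such a file, assuming the interface is named eth0 and using placeholder addresses, could look like this (saved, for example, as /etc/systemd/network/10-static-eth0.network):

# /etc/systemd/network/10-static-eth0.network
[Match]
Name=eth0

[Network]
Address=192.168.10.50/24
Gateway=192.168.10.1
DNS=192.168.10.10

Then set the permissions and restart the network service:

chmod 644 /etc/systemd/network/10-static-eth0.network
systemctl restart systemd-networkd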

[Image: photonos-4]

Please refer to this guide for further instructions on the systemd.network service.

SSH Configuration:

The default iptables policy accepts SSH connections but the sshd configuration file on the full version of Photon OS is set to reject root login over SSH. To permit root login over SSH, we need to open /etc/ssh/sshd_config with the vim text editor, set PermitRootLogin to yes and restart the SSH daemon.
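In practice, that boils down to the following (the comment shows the line to change):

# Edit the SSH daemon configuration and allow root login
vim /etc/ssh/sshd_config      # set: PermitRootLogin yes
systemctl restart sshd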

[Image: photonos-6]

Enable Docker:

Among all these configuration steps, I have almost forgotten why we are doing this; the ultimate objective is to run containers, right? The full version of Photon OS includes the open source version of Docker (v1.12.1), but it is not started by default, so we need to start the daemon and enable it so that it comes up at boot.

  • Start docker: systemctl start docker
  • Enable docker: systemctl enable docker

Now, we are ready to run docker commands and spin up some containers.
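For a quick smoke test, the classic hello-world image from Docker Hub is enough, and a throwaway nginx container shows a slightly more realistic case; both image names are just examples:

# Verify the daemon works end to end
docker run --rm hello-world
# Run a disposable web server container, published on port 80 of the host
docker run -d --name web -p 80:80 nginx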

[Image: photonos-7]