Tutorial: Poor Man’s Release/Shipping Process on GitLab


Since the recent acquisition of GitHub by Microsoft, a lot of people and companies have been migrating to GitLab. The GitLab offering is especially attractive to early-stage start-ups since it includes 10,000 free minutes on GitLab's shared CI runners. Once you run out of them, it is easy to move the workload over to your own Kubernetes cluster – there is an easy-to-use interface where you can do that with just a few clicks.

Because of that, companies need some way to implement a release process on GitLab. There are, of course, plenty of great solutions out there, but let me present the process I came up with. Besides meeting the basic requirements of any release process, it also takes minimal resources in terms of money and time because it is implemented directly on top of the GitLab CI functionality.

Are you interested? Let’s begin.

The overarching idea of this release process is to use GitLab's embedded Docker registry together with Docker's tagging functionality. Different branches are used for the different stages of the release process – development images, user acceptance testing (UAT) images, and the final images. Those branches also encode which binaries to pull into the resulting Docker image. After a certain number of iterations, the development image is promoted to a UAT image. After some testing by, for example, QA engineers, and some housekeeping (e.g. updating documentation or informing clients), the image is finally released into the wild by retagging it with the final version tag.
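The promotion flow boils down to a chain of pull/tag/push operations between three image references. As a rough sketch (the helper function and registry path are illustrative, not part of the article's setup), the image reference for each stage could be derived like this:

```shell
#!/bin/sh
# Illustrative helper (names are hypothetical): map a release stage to the
# Docker image reference used at that stage of the process.
REGISTRY="registry.gitlab.com/companyname/coolproject/companyname"

image_for_stage() {
  stage="$1"; version="$2"
  case "$stage" in
    dev)     echo "$REGISTRY:latest" ;;          # rebuilt on every push to master
    uat)     echo "$REGISTRY:${version}-UAT" ;;  # promoted development image
    release) echo "$REGISTRY:${version}" ;;      # final, user-facing tag
  esac
}

image_for_stage dev
image_for_stage uat 0.0.1
image_for_stage release 0.0.1
```

Each promotion step then just pulls the previous stage's reference, retags it to the next stage's reference, and pushes it back.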

Let’s drill down into the finer details.

All GitLab repositories currently come with a free, embedded Docker registry. I recommend using a separate repository for this release process – you may not want to release a new development image every time new code is pushed, and one day you might accidentally conflate the branches used for the release process with those used for development. You could “protect” the branches instead, but I don't think it is worth it.

First of all, you need a Dockerfile that will be used to build the image. This guide will not cover writing it – that part is up to you. My only recommendation is to treat the image as if it is going to be shipped to your users, i.e. it has to contain everything.
Afterward, create a .gitlab-ci.yml file in the root of your repository and start building the blocks for the Docker image build. Begin by specifying that it is a docker-in-docker build and, in the before_script part, log in to the GitLab registry:

image: docker:stable

variables:
  DOCKER_DRIVER: overlay2

services:
- docker:dind

stages:
- build

before_script:
- docker info
- docker login -u gitlab-ci-token -p $CI_BUILD_TOKEN registry.gitlab.com

The only stage needed is the build stage since that is the only thing we want to do in this repository. Adapt this if you want to embed the release process inside your code repository (i.e. if it already has a .gitlab-ci.yml).

According to the GitLab documentation, the overlay2 storage driver gives better performance in docker-in-docker builds, so we use it here. We will not go into benchmarks of the different storage drivers.

Next, the image build needs to be specified as a job in the build stage of the .gitlab-ci.yml file. This is how I recommend doing it:

build_image:
  only:
  - master
  stage: build
  script:
    - "docker build -t companyname ."
    - docker tag companyname registry.gitlab.com/companyname/coolproject/companyname:latest
    - docker push registry.gitlab.com/companyname/coolproject/companyname:latest

Replace companyname in this snippet with your own company name, and coolproject with the name of your project in GitLab. Most projects name their main branch master, but change that too if it does not fit your case.
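If you prefer not to hardcode the registry path at all, GitLab exposes predefined CI variables such as $CI_REGISTRY_IMAGE, which expands to the registry path of the current project. A sketch of the same job using it (job layout otherwise unchanged):

```yaml
# Sketch: the same build job, but using GitLab's predefined
# $CI_REGISTRY_IMAGE variable instead of a hardcoded registry path.
build_image:
  only:
  - master
  stage: build
  script:
    - docker build -t "$CI_REGISTRY_IMAGE:latest" .
    - docker push "$CI_REGISTRY_IMAGE:latest"
```

This way the configuration survives renaming the project or moving it to another group.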

Furthermore, create branches named uat and release. They represent the later stages of the release process outlined earlier. You may want to leave a README.md file in those branches that explains how the release process works.

In those separate branches, you will have a slightly modified .gitlab-ci.yml file which essentially just pulls a Docker image, retags it, and pushes it back to the registry.

In the uat branch, the .gitlab-ci.yml file should contain something like this (the other parts are skipped for brevity):

variables:
  DOCKER_DRIVER: overlay2
  CURRENT_IMAGE: registry.gitlab.com/companyname/coolproject/companyname:latest
  UAT_IMAGE: registry.gitlab.com/companyname/coolproject/companyname:0.0.1-UAT

stages:
- retag

pull_latest_and_tag_uat:
  only:
  - uat
  stage: retag
  script:
    - docker pull $CURRENT_IMAGE
    - docker tag $CURRENT_IMAGE $UAT_IMAGE
    - docker push $UAT_IMAGE

As you can see, the newest development image is simply pulled, retagged, and pushed back to GitLab's Docker registry.
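Note that 0.0.1 is hardcoded in the variables above, so every release means editing .gitlab-ci.yml. One possible variation (a sketch; VERSION is a variable name I am introducing here, not something GitLab defines) is to supply the version when the pipeline is run, e.g. via “Run pipeline” in the GitLab UI:

```yaml
# Sketch: take the version from a pipeline variable instead of hardcoding it.
variables:
  DOCKER_DRIVER: overlay2
  CURRENT_IMAGE: registry.gitlab.com/companyname/coolproject/companyname:latest
  # VERSION is expected to be set when triggering the pipeline,
  # e.g. VERSION=0.0.2
  UAT_IMAGE: registry.gitlab.com/companyname/coolproject/companyname:${VERSION}-UAT
```

The retag job itself stays exactly the same.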

Finally, in the release branch, essentially the same configuration is used, except that this time $UAT_IMAGE is retagged as 0.0.1:

variables:
  DOCKER_DRIVER: overlay2
  CURRENT_IMAGE: registry.gitlab.com/companyname/coolproject/companyname:0.0.1-UAT
  RELEASE_IMAGE: registry.gitlab.com/companyname/coolproject/companyname:0.0.1

stages:
- retag

pull_latest_and_tag_release:
  only:
  - release
  stage: retag
  script:
    - docker pull $CURRENT_IMAGE
    - docker tag $CURRENT_IMAGE $RELEASE_IMAGE
    - docker push $RELEASE_IMAGE

In the end, you are left with a Docker image whose tag went through the usual release phases: development, extensive testing, golden release. I also recommend protecting the uat and release branches so that no one accidentally pushes to them and overwrites your Docker images.

I hope this tutorial was useful. Happy hacking! Let me know if you run into any issues.

Terraform vSphere provider 1.x now supports deploying OVA files, makes using ovftool on ESXi hosts obsolete

You may currently have a pipeline in your CI/CD process that involves running ovftool directly on your ESXi host to deploy an OVA appliance with as little overhead as possible, as in this tutorial on virtuallyGhetto, or you may have picked up some tips about running ovftool on ESXi from an article that I wrote some time ago.

However, that is certainly not the best solution, for several reasons. For example, if the input is not sanitized somewhere, it becomes trivial to execute arbitrary commands on the ESXi host. Instead, I recommend using a tool like Terraform and keeping your infrastructure as code in a Git repository. With the 1.x versions of Terraform's vSphere provider, you can implement such a pipeline in Terraform as well.

It has its downsides, though – for instance, it is impossible to explicitly express dependencies between module resources – but I still think the tool is worthwhile and future-proof, so I will show you how to move over to it.


The key idea is that instead of deploying the OVA via ovftool and specifying the properties up front, you deploy it once without specifying anything and without powering the virtual machine on. Then you convert it to a template, clone virtual machines from that template, and specify the properties on the clones, so that the fresh properties are picked up on first boot. And because the OVA only needs to be imported once, there is no longer any reason to do it directly on the ESXi host – after the first import there are no time savings left to win, so we ditch the idea of deploying from the ESXi host entirely. That is the main change.

Deploying the OVA first time

To deploy the OVA the first time, you could use something like govc, which is freely available, instead of ovftool. Before downloading ovftool you are forced to accept certain terms and conditions and create a VMware account, which can become a nuisance, so I recommend other tools.

You could invoke govc like this:

GOVC_URL=user:pass@host govc import.ova -ds=datastore -folder=somefolder -host=host -name=template_from_ovf ./vmware-vcsa.ova

The -options parameter is not really useful in our case because we deliberately do not want to specify any properties. Another parameter you may find useful is -pool, which specifies the resource pool into which the new virtual machine will be deployed.

After deploying the OVA, convert it to a template as such:

GOVC_URL=user:pass@host govc vm.markastemplate template_from_ovf

Now it is ready to be used by the Terraform part of our pipeline.

Terraform: provisioning the VMs

Since version 1.0.0 of the vSphere Terraform provider, it supports specifying vApp (OVA) properties. We will leverage this feature to specify them after cloning a new virtual machine from the template. I am not going to go over how to use Terraform in this article, but I will provide an example of a vsphere_virtual_machine resource that does just what we need for this part of the pipeline.

In main.tf (or any other file ending in .tf – you could use different .tf files for different virtual machines) you need something more or less like this:

data "vsphere_datacenter" "dc" {
  name = "dc1"
}

data "vsphere_datastore" "datastore" {
  name          = "datastore1"
  datacenter_id = "${data.vsphere_datacenter.dc.id}"
}

data "vsphere_resource_pool" "pool" {
  name          = "cluster1/Resources"
  datacenter_id = "${data.vsphere_datacenter.dc.id}"
}

data "vsphere_network" "network" {
  name          = "public"
  datacenter_id = "${data.vsphere_datacenter.dc.id}"
}

data "vsphere_virtual_machine" "template_from_ovf" {
  name          = "template_from_ovf"
  datacenter_id = "${data.vsphere_datacenter.dc.id}"
}

resource "vsphere_virtual_machine" "vm" {
  name             = "terraform-test"
  resource_pool_id = "${data.vsphere_resource_pool.pool.id}"
  datastore_id     = "${data.vsphere_datastore.datastore.id}"

  num_cpus = 2
  memory   = 1024
  guest_id = "${data.vsphere_virtual_machine.template_from_ovf.guest_id}"

  scsi_type = "${data.vsphere_virtual_machine.template_from_ovf.scsi_type}"

  network_interface {
    network_id   = "${data.vsphere_network.network.id}"
    adapter_type = "${data.vsphere_virtual_machine.template_from_ovf.network_interface_types[0]}"
  }

  disk {
    name             = "disk0"
    size             = "${data.vsphere_virtual_machine.template_from_ovf.disks.0.size}"
    eagerly_scrub    = "${data.vsphere_virtual_machine.template_from_ovf.disks.0.eagerly_scrub}"
    thin_provisioned = "${data.vsphere_virtual_machine.template_from_ovf.disks.0.thin_provisioned}"
  }

  clone {
    template_uuid = "${data.vsphere_virtual_machine.template_from_ovf.id}"
  }

  vapp {
    properties {
      "guestinfo.hostname"                        = "terraform-test.foobar.local"
      "guestinfo.interface.0.name"                = "ens192"
      "guestinfo.interface.0.ip.0.address"        = "10.0.0.100/24"
      "guestinfo.interface.0.route.0.gateway"     = "10.0.0.1"
      "guestinfo.interface.0.route.0.destination" = "0.0.0.0/0"
      "guestinfo.dns.server.0"                    = "10.0.0.10"
    }
  }
}

(Adapted from the Terraform vSphere provider documentation, where you can find much more information.)

I will go over each section:

  • First, information is gathered from various data sources: vsphere_datacenter, vsphere_datastore, and so on.
  • Then a new vsphere_virtual_machine resource named “vm” is created; the VM itself will have the name “terraform-test”. Some other options are specified that you can find in the example. Everything looks standard except for the vapp section – that is where you specify the OVA properties. You can list all of them with this command:

ovftool /the/file.ova

The vSphere client can also help you out here if you go to File > Deploy OVF Template. Another option is to look at the .ovf descriptor inside the OVA file itself – an OVA is just an archive, so open it with 7zip or any other program that can read archive files.

  • Gather the properties via your chosen method and specify them in the vapp section.
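Since an OVA is just a tar archive whose .ovf descriptor declares the vApp properties, you can also list them without any VMware tooling. A self-contained sketch (the descriptor content below is a made-up minimal example, not from a real appliance):

```shell
#!/bin/sh
# An OVA is a tar archive; the vApp properties live in the .ovf descriptor
# inside it. Build a tiny fake OVA here just to demonstrate pulling the
# property keys out of the archive.
workdir=$(mktemp -d)
cat > "$workdir/appliance.ovf" <<'EOF'
<Envelope>
  <ProductSection>
    <Property ovf:key="guestinfo.hostname" ovf:type="string"/>
    <Property ovf:key="guestinfo.dns.server.0" ovf:type="string"/>
  </ProductSection>
</Envelope>
EOF
tar -cf "$workdir/appliance.ova" -C "$workdir" appliance.ovf

# List the property keys straight from the archive, no extraction to disk:
tar -xOf "$workdir/appliance.ova" appliance.ovf | grep -o 'ovf:key="[^"]*"'
```

The same `tar -xOf … | grep` one-liner works on a real OVA once you know the name of its .ovf entry (`tar -tf file.ova` shows it).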

To understand what the other options mean, consult the Terraform vSphere provider documentation; I will not copy and paste it here for brevity.

Afterwards, run terraform apply with your custom options (if applicable) and watch Terraform pick everything up and show you the details of the actions it will perform once you enter yes. So, finally, enter yes and watch the virtual machine get created.
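If you run this from GitLab CI, tying things back to the first part of this post, the apply step can live in a pipeline job. A sketch (job and stage names are illustrative; hashicorp/terraform is the official Terraform image on Docker Hub, and its entrypoint needs to be overridden for GitLab CI):

```yaml
# Sketch of a GitLab CI job that applies the Terraform configuration
# non-interactively on pushes to master.
deploy_vms:
  stage: deploy
  image:
    name: hashicorp/terraform:light
    entrypoint: [""]
  script:
    - terraform init
    - terraform apply -auto-approve
  only:
  - master
```

Credentials for vSphere would come from CI variables rather than being committed to the repository.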

Conclusion

This concludes the tutorial. If you need to create more virtual machines from the same template, duplicate the vsphere_virtual_machine resource declaration (and the others, if needed) and run terraform apply again; it will create the additional virtual machines. You might want to pass -auto-approve to terraform apply in your CD pipeline so that you do not have to enter yes every time. Check out the other options in the Terraform documentation.

By using this method, your deployment pipeline just became less intrusive, faster, and more flexible, because you can actually specify the configuration of the resulting VM, whereas with the old method you were stuck with the configuration baked into the OVA file unless you had post-provisioning scripts in place. Have fun!