Are Scrum, Agile, and other iterative programming methodologies useful for niche specialties such as SRE, DevOps engineering, and so on?

Intro

Recently I heard that someone was asked in an “IT Systems Engineer” interview: “Why are you using Agile in your team? Aren’t most of the tasks on your team ad-hoc?”. That made me think about this topic more deeply. The work done in these kinds of teams might seem distant from regular programming at first glance. However, it is not: in general it is still focused on automating things with software, avoiding manual labor, and building reliable systems. In other words, only the product and the clients are different – they are internal, whereas usually they are external. Let me try to explain.

Difference between sysadmins and SREs

Let’s first note the difference between the two – this is important because a lot of people have the wrong perception that everyone other than developers does not create any kind of software. After all, the SRE/DevOps movement is mainly about eliminating toil. For example, Google does not allow its SREs to spend more than 50% of their time on manual work – anything above that threshold should be handled by some kind of automation. The sysadmin model, on the other hand, divides IT and development into two separate silos – the developers and the system administrators. Everyone does their own thing and barely collaborates. Sysadmins also usually do not create much software of their own – mostly just small Perl or Bash scripts. As you can see, there is a stark difference in mindsets, and here we are talking about the former one.

Iterative programming methodologies for SREs

Just like in the “normal” programming world, changing requirements are an inevitable part of the whole gig – unless, perhaps, you are making software where reliability is of the utmost importance. For example, you would not want a plane to crash because of an overflow error (which still happens sometimes, as in this case of the Boeing Dreamliner).

The benefits and downsides of each methodology are essentially the same here as in regular development. Let me give an example. After a month, the DNS service you maintain might get a new requirement: the self-service portal should get a batch creation feature, letting users create many records with just a few clicks and saving them precious time. Such a requirement is rarely apparent at the beginning.

This is where iterative techniques such as Agile are useful, because they address this problem of uncertainty – they embrace it and let you pivot in the middle of your development process (between sprints). With sequential development methodologies, you would have to wait until the end of a whole cycle to implement any kind of new requirement.

Practically employing iterative programming in the day-to-day life of a site reliability engineer

For special, toil-type tasks, you could create one big umbrella task in your time-tracking software. For example, create a task in Jira under which all of the other tasks are created as sub-tasks, where all of the nitty-gritty details are written down and the time is tracked.

Afterward, you could sum up the time spent on toil-type tasks versus everything else. Then you could tell whether you or your engineers spend more than 50% of their time on this type of work, which would be a signal that something is wrong. For example, if 90 out of 160 logged hours in a month fall under the toil umbrella task, that is roughly 56% – above the threshold.

Tutorial: Poor Man’s Release/Shipping Process on GitLab

Since the recent acquisition of GitHub by Microsoft, a lot of people and companies have been migrating to GitLab. The GitLab offering is especially attractive to early-stage start-ups since it includes 10,000 free minutes on GitLab’s shared CI servers. Once you run out of them, it is easy to move the workload to your own Kubernetes cluster – there is an easy-to-use interface where you can do that with just a few clicks.

Because of that, companies need some way to implement a release process on GitLab. Obviously, there are a lot of great solutions out there; however, let me present the process that I came up with. Besides meeting the basic requirements of any release process, this one also takes minimal resources in terms of money and time because it is implemented directly with GitLab CI functionality.

Are you interested? Let’s begin.

The overarching idea of this release process is to use GitLab’s embedded Docker registry and its tagging functionality. Different branches are going to be used for the different stages of the release process – development images, user acceptance testing (UAT) images, and the final release images. Those branches will also contain information about which binaries to pull into the resulting Docker image. After a certain number of iterations, the development image is promoted to a UAT image. After some testing by, for example, QA engineers, and some housekeeping (e.g. updating documentation or informing clients), it is finally released into the wild by retagging the image with the final version tag.
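
To make the promotion flow concrete, here is one way it could be driven from the command line, assuming the branch names used later in this tutorial (master, uat, release); the commit messages and version numbers are just examples:

# pushing to master builds and pushes the :latest development image
git checkout master
git push origin master

# promote the current development image: an (empty) commit on uat re-runs the retag job
git checkout uat
git commit --allow-empty -m "Promote latest image to 0.0.1-UAT"
git push origin uat

# after QA sign-off and housekeeping, cut the final release the same way
git checkout release
git commit --allow-empty -m "Release 0.0.1"
git push origin release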

Let’s drill down into the finer details.

All GitLab repositories currently have a free, embedded Docker registry enabled. I recommend using a separate repository for this release process – you may not want to release a new development image every time new code gets pushed, and one day you might accidentally conflate the branches used for the release process with the ones used for development. Obviously, you could “protect” those branches, but I still don’t think it is worth it.

First of all, you need a Dockerfile that will be used to build the image. This guide will not cover writing it – that part is up to you. My only recommendation is to treat the image as if it is going to be shipped to your users, i.e. it has to contain everything they need.
Afterward, create a .gitlab-ci.yml file in the root of your repository and start building the blocks for the Docker image build. Begin by specifying that it is a docker-in-docker build, and in the before_script part, log in to the GitLab registry:

image: docker:stable

variables:
  DOCKER_DRIVER: overlay2

services:
- docker:dind

stages:
- build

before_script:
- docker info
- docker login -u gitlab-ci-token -p $CI_BUILD_TOKEN registry.gitlab.com

The only stage needed here is the build stage, since building the image is the only thing we want to do in this repository. Adapt this if you want to embed the release process inside your code repository (i.e. if you already have a .gitlab-ci.yml of some kind).
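
If you do go down that road, here is a rough sketch of how the stages could coexist in an existing pipeline (the run_tests job is a made-up placeholder for whatever you already run):

stages:
- test
- build

run_tests:
  stage: test
  script:
    - echo "your existing checks run here"

# the build_image job from the next snippet then simply declares "stage: build"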

overlayfs supposedly (according to the GitLab documentation) gives better performance in docker-in-docker builds, hence the DOCKER_DRIVER: overlay2 setting. We will not go into benchmarks of different storage drivers here.

Next, the image-building job needs to be specified as part of the build stage in the .gitlab-ci.yml file. This is how I recommend doing it:

build_image:
  only:
  - master
  stage: build
  script:
    - "docker build -t companyname ."
    - docker tag companyname registry.gitlab.com/companyname/coolproject/companyname:latest
    - docker push registry.gitlab.com/companyname/coolproject/companyname:latest

Replace companyname in this snippet with your own company name; likewise, coolproject should be the name of your project in GitLab. Most people name their main branch master, but again, change that if it does not fit your case.
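
If you would rather not hard-code those values, GitLab also exposes predefined CI variables such as CI_REGISTRY and CI_REGISTRY_IMAGE that already point at your project’s registry path; a minimal sketch of the same job using them:

build_image:
  only:
  - master
  stage: build
  script:
    # $CI_REGISTRY_IMAGE expands to registry.gitlab.com/<group>/<project>
    - docker build -t $CI_REGISTRY_IMAGE:latest .
    - docker push $CI_REGISTRY_IMAGE:latest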

Furthermore, create branches named uat and release. They will represent the different parts of the release process outlined earlier. You could also leave a README.md in those branches that explains how the release process works.
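
Creating them is plain git work – for example:

git checkout master
git checkout -b uat
git push -u origin uat

git checkout master
git checkout -b release
git push -u origin release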

In those separate branches, you will have a slightly modified .gitlab-ci.yml file which will essentially just pull a Docker image, retag it, and upload it back to the registry.

In the uat branch, the .gitlab-ci.yml file should contain something like this (the other parts are skipped for brevity):

variables:
  DOCKER_DRIVER: overlay2
  CURRENT_IMAGE: registry.gitlab.com/companyname/coolproject/companyname:latest
  UAT_IMAGE: registry.gitlab.com/companyname/coolproject/companyname:0.0.1-UAT

stages:
- retag

pull_latest_and_tag_uat:
  only:
  - uat
  stage: retag
  script:
    - docker pull $CURRENT_IMAGE
    - docker tag $CURRENT_IMAGE $UAT_IMAGE
    - docker push $UAT_IMAGE

As you can see, the newest development image is simply pulled, tagged, and uploaded back to the GitLab’s Docker registry.
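
Note that the parts “skipped for brevity” are the same docker-in-docker boilerplate as in the build pipeline – the uat (and later release) branch pipelines need it too, otherwise the docker commands have no daemon to talk to and no registry credentials:

image: docker:stable

services:
- docker:dind

before_script:
- docker login -u gitlab-ci-token -p $CI_BUILD_TOKEN registry.gitlab.com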

Finally, in the release branch essentially the same job should be used, except that this time we retag $UAT_IMAGE as 0.0.1:

variables:
  DOCKER_DRIVER: overlay2
  CURRENT_IMAGE: registry.gitlab.com/companyname/coolproject/companyname:0.0.1-UAT
  RELEASE_IMAGE: registry.gitlab.com/companyname/coolproject/companyname:0.0.1

stages:
- retag

pull_latest_and_tag_release:
  only:
  - release
  stage: retag
  script:
    - docker pull $CURRENT_IMAGE
    - docker tag $CURRENT_IMAGE $RELEASE_IMAGE
    - docker push $RELEASE_IMAGE

In the end, you are left with a Docker image with a proper tag that went through the usual release phases: development, extensive testing, and the golden release. I also recommend protecting the uat and release branches so that someone does not accidentally push to them and overwrite your Docker images.
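
Once the release branch pipeline finishes, the result can be pulled straight from the registry – using the example names from the snippets above:

docker login registry.gitlab.com
docker pull registry.gitlab.com/companyname/coolproject/companyname:0.0.1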

I hope this tutorial was useful. Happy hacking! Let me know if you run into any issues.