Terraform vSphere provider 1.x now supports deploying OVA files, making ovftool on ESXi hosts obsolete

You may currently have a pipeline in your CI/CD process that runs ovftool directly on your ESXi host to deploy an OVA appliance from it with as little overhead as possible, as in this tutorial on virtuallyGhetto, or you may have picked up some tips about running ovftool on ESXi from this article that I wrote some time ago.

However, that is certainly not the best solution, for several reasons. For example, if the input is not sanitized somewhere, it becomes trivial to execute arbitrary commands on the ESXi host. Instead, I recommend using a tool like Terraform so that your infrastructure is stored as code, for example in a Git repository. With the 1.x version of Terraform's vSphere provider you can implement such a pipeline in Terraform as well.

It has its downsides, though. For instance, it is impossible to explicitly express dependencies between module resources. Even so, I still think the tool is worthwhile and future-proof, and you should move over to it. I will show you how.

Your pipeline before might look something like this:

Instead, we will transform it to this:

The key idea is that instead of deploying the OVA via ovftool and specifying the properties up front, you deploy it once without specifying anything and without powering the virtual machine on. Afterwards, you convert it to a template, clone that template, and specify the properties on the clone; that way the fresh properties are picked up the first time the clone is powered on. Also, because we only deploy the OVA once, there is no need to do it directly on the ESXi host every time: after the first import there are no time savings left to win, so let's ditch the idea of deploying directly from the ESXi host entirely. That's the main change.
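To make this concrete, here is a minimal sketch of the new pipeline, assuming govc and terraform are available and that the credentials, datastore, and file names below are placeholders (each step is explained in the following sections):

# one-time step: import the OVA without any properties and convert it to a template
GOVC_URL=user:pass@host govc import.ova -ds=datastore -name=template_from_ovf ./vmware-vcsa.ova
GOVC_URL=user:pass@host govc vm.markastemplate template_from_ovf

# repeatable step: clone VMs from that template and inject the vApp properties
terraform apply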

Deploying the OVA for the first time

To deploy the OVA the first time, you could use something like govc, which is freely available, instead of ovftool. Before downloading ovftool you are forced to accept certain terms and conditions and create a VMware account, which can become a nuisance, so I recommend using other tools.

You could invoke govc like this:

GOVC_URL=user:pass@host govc import.ova -ds=datastore -folder=somefolder -host=host -name=template_from_ovf ./vmware-vcsa.ova

The -options parameter is not really useful in our case because we deliberately do not want to specify any properties. Another parameter that you may find useful is -pool, which specifies the resource pool into which the new virtual machine will be deployed.
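For example, a variant of the import command that also targets a specific resource pool might look like this (the pool path is only an illustration and follows the cluster1/Resources layout used later in the Terraform example):

GOVC_URL=user:pass@host govc import.ova -ds=datastore -pool=cluster1/Resources -folder=somefolder -name=template_from_ovf ./vmware-vcsa.ova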

After deploying the OVA, convert it to a template as such:

GOVC_URL=user:pass@host govc vm.markastemplate template_from_ovf

Now it is ready for use by the Terraform part of our pipeline.

Terraform: provisioning the VMs

Since version 1.0.0 of the vSphere Terraform provider, it supports specifying vApp (OVA) properties. We will leverage this feature to set them after cloning a new virtual machine from that template. I am not going to go over how to use Terraform in this article, but I will provide an example of a vsphere_virtual_machine resource that does just what we need for this part of the pipeline.

In main.tf (or any other file that ends in .tf; you could use different .tf files for different virtual machines) you need something more or less like this:

data "vsphere_datacenter" "dc" {
  name = "dc1"
}

data "vsphere_datastore" "datastore" {
  name          = "datastore1"
  datacenter_id = "${data.vsphere_datacenter.dc.id}"
}

data "vsphere_resource_pool" "pool" {
  name          = "cluster1/Resources"
  datacenter_id = "${data.vsphere_datacenter.dc.id}"
}

data "vsphere_network" "network" {
  name          = "public"
  datacenter_id = "${data.vsphere_datacenter.dc.id}"
}

data "vsphere_virtual_machine" "tempate_from_ovf" {
  name          = "template_from_ovf"
  datacenter_id = "${data.vsphere_datacenter.dc.id}"
}

resource "vsphere_virtual_machine" "vm" {
  name             = "terraform-test"
  resource_pool_id = "${data.vsphere_resource_pool.pool.id}"
  datastore_id     = "${data.vsphere_datastore.datastore.id}"

  num_cpus = 2
  memory   = 1024
  guest_id = "${data.vsphere_virtual_machine.template_from_ovf.guest_id}"

  scsi_type = "${data.vsphere_virtual_machine.template_from_ovf.scsi_type}"

  network_interface {
    network_id   = "${data.vsphere_network.network.id}"
    adapter_type = "${data.vsphere_virtual_machine.template_from_ovf.network_interface_types[0]}"
  }

  disk {
    name             = "disk0"
    size             = "${data.vsphere_virtual_machine.template_from_ovf.disks.0.size}"
    eagerly_scrub    = "${data.vsphere_virtual_machine.template_from_ovf.disks.0.eagerly_scrub}"
    thin_provisioned = "${data.vsphere_virtual_machine.template_from_ovf.disks.0.thin_provisioned}"
  }

  clone {
    template_uuid = "${data.vsphere_virtual_machine.template_from_ovf.id}"
  }

  vapp {
    properties {
      "guestinfo.hostname"                        = "terraform-test.foobar.local"
      "guestinfo.interface.0.name"                = "ens192"
      "guestinfo.interface.0.ip.0.address"        = "10.0.0.100/24"
      "guestinfo.interface.0.route.0.gateway"     = "10.0.0.1"
      "guestinfo.interface.0.route.0.destination" = "0.0.0.0/0"
      "guestinfo.dns.server.0"                    = "10.0.0.10"
    }
  }
}

(Adapted from the example here; you can find much more information there.)

I will go over each section:

  • First, information is gathered from various data sources: vsphere_datacenter, vsphere_datastore, and so on.
  • After that, a new vsphere_virtual_machine resource named "vm" is created. The VM itself will have the name "terraform-test". Some other options are specified which you can see in the example. Everything is a standard clone except for the vapp section: that is where you specify the OVA properties. You can list all of them with this command:
ovftool /the/file.ova

Also, the vSphere client can help you out with this if you go to File > Deploy OVF Template. Another option is to look at the OVF descriptor (the .ovf file) inside the OVA; an OVA is just an archive, so open it with 7zip or any other program that can read archive files (see the small example after this list).

  • Gather the properties via your chosen method and specify them in the vapp section.
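For instance, since an OVA is a tar archive, one quick way to peek at the defined property keys from a shell might be something like this (the file name is a placeholder and the exact output depends on the appliance):

# extract the OVF descriptor from the OVA and list its vApp property keys
tar --wildcards -xOf ./vmware-vcsa.ova '*.ovf' | grep -o 'ovf:key="[^"]*"'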

To understand what the other options mean, consult the vSphere provider documentation for Terraform. I will not copy and paste it here, for brevity.

Afterwards, run terraform apply with your custom options (if applicable) and watch Terraform pick everything up and show you the details of the actions that will be performed if you enter yes. So, finally, enter yes and watch the virtual machine get created.
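In its simplest form, and assuming the .tf files live in the current directory, the run looks like this:

terraform init   # download the vSphere provider plugin
terraform plan   # preview the actions that will be performed
terraform apply  # perform them after you confirm with "yes"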

Conclusion

This concludes this tutorial. Now, in case you need to create more virtual machines from the same template, duplicate the vsphere_virtual_machine resource declaration and the other blocks (if needed), as sketched below. Run terraform apply again and it will create the other virtual machines. You might want to pass -auto-approve to terraform apply in your CD pipeline; that way you will not have to enter yes every time. Check out the other options here.
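For example, a second virtual machine could be declared by copying the resource and changing only what differs. This is just a sketch that reuses the data sources from the example above; the name and addresses are hypothetical:

# a second VM cloned from the same template; only the name and vApp properties differ
resource "vsphere_virtual_machine" "vm2" {
  name             = "terraform-test-2"
  resource_pool_id = "${data.vsphere_resource_pool.pool.id}"
  datastore_id     = "${data.vsphere_datastore.datastore.id}"

  num_cpus = 2
  memory   = 1024
  guest_id = "${data.vsphere_virtual_machine.template_from_ovf.guest_id}"

  network_interface {
    network_id = "${data.vsphere_network.network.id}"
  }

  disk {
    name = "disk0"
    size = "${data.vsphere_virtual_machine.template_from_ovf.disks.0.size}"
  }

  clone {
    template_uuid = "${data.vsphere_virtual_machine.template_from_ovf.id}"
  }

  vapp {
    properties {
      "guestinfo.hostname"                 = "terraform-test-2.foobar.local"
      "guestinfo.interface.0.name"         = "ens192"
      "guestinfo.interface.0.ip.0.address" = "10.0.0.101/24"
    }
  }
}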

By using this method, your deployment pipeline just became less intrusive, faster, and more flexible, because you can actually specify the configuration of the resulting VM, whereas with the old method you were stuck with the VM configuration baked into the OVA file unless you had some post-provisioning scripts in place. Have fun!

Tutorial: Automatic, Reproducible Preparation of Virtual Machine templates for your vSphere environment with HashiCorp’s Packer

Introduction

Let us say that you are using a vSphere (VMware vCenter + one or more ESXi servers) environment in your company and you create virtual machines fairly often. Obviously you are familiar with the virtual machine template functionality, but where do you get the templates? One option is to look for them somewhere on the internet, but that is not very reliable since you have to trust a third party to provide them. Most, if not all, operating system distributors do not provide a VMware template for their operating system; Canonical, for example, only provides Ubuntu in the form of an ISO file. Thus, you have to change your approach and make the templates yourself. However, doing that manually is tedious and very menial, so you will look for solutions to this problem. Packer is something that solves it.

From here on, I will assume some level of familiarity with Packer. It has a builder called vmware-iso that looks like it can do the trick. However, you will run into problems quickly. For example, the vsphere-template post-processor only works if you run the vmware-iso builder remotely; it does not work with your local virtual machine. Because of this and other issues you have to apply some tricks to make the post-processing section work. Also, after using that builder once, you might notice some shortcomings in how Packer detects the IP of the virtual machine: it takes the first DHCP lease from ethernet0 and uses that to communicate with your virtual machine, even though that is, by default, the network used for network address translation (NAT). This post will show you how to make it all work. By the end of it, you should have a working pipeline:

  • a new virtual machine is automatically created in VMware Workstation with the needed parameters
  • specified ISO is mounted and the OS is installed using a “pre-seeded” configuration
  • some optional post-processing scripts will be run on the virtual machine
  • the resulting virtual machine template will be uploaded to the specified vCenter

As you can see, this process is perfect for making golden templates of various operating systems. Let us delve into it using Ubuntu 16.04 as an example.

Step 1: starting up the virtual machine

The vmware-iso builder is perfect for this, except that we have to apply some tricks to make Packer pick up the correct IP address. By default, the first and only network is used for network address translation, which means that when the virtual machine is deployed, Packer will pick up the DHCP lease it was given and try to connect to it via SSH for provisioning. Because network address translation is happening, there is no way to reach that deployed virtual machine without some tricks.

To solve this problem, I recommend specifying custom VMX properties which connect ethernet0 to vmnet8 and ethernet1 to vmnet1. vmnet8 is an internal, private network that will be used for communication between the host machine and the virtual machine. ethernet1 is connected to vmnet1, which is the network with a DHCP server that is used for network address translation and lets the virtual machine access the Internet. Such a configuration of vmnet1 and vmnet8 is the default in VMware Workstation, so no changes are needed on that side.

This is achievable by specifying the following in the builders section of the JSON configuration file:

{
    "vmx_data": {
        "ethernet0.present": "TRUE",
        "ethernet0.startConnected": "TRUE",
        "ethernet0.connectionType": "custom",
        "ethernet0.vnet": "vmnet8",
        "ethernet1.present": "TRUE",
        "ethernet1.startConnected": "TRUE",
        "ethernet1.connectionType": "custom",
        "ethernet1.vnet": "vmnet1"
    }
}

You can find more information about VMX properties on this website: http://sanbarrow.com/vmx/vmx-network-advanced.html.

However, after provisioning, you might want to disable the second interface, because it is a good default to have only one virtual NIC connected to a virtual machine template. If the user wants to add more networks and NICs, that is up to them. So, to disable the second NIC after everything is done, add this to the configuration file that is passed to Packer:

{
    "vmx_data_post": {
        "floppy0.present": "FALSE",
        "ethernet1.present": "FALSE"
    }
}

Note that this also removes the virtual floppy drive after provisioning. This is needed to unmount the floppy disk that holds the pre-seed file; obviously it is not needed in the final virtual machine template. This is all you need so far. You could run the build now and see the Ubuntu installer start automatically. This is what my builders section looks like:

{
  "builders": [
    {
      "type": "vmware-iso",
      "iso_url": "http://releases.ubuntu.com/16.04/ubuntu-16.04.3-server-amd64.iso",
      "iso_checksum": "10fcd20619dce11fe094e960c85ba4a9",
      "iso_checksum_type": "md5",
      "ssh_username": "root",
      "ssh_password": "Giedrius",
      "vm_name": "Ubuntu_16.04_x64",
      "ssh_wait_timeout": "600s",
      "shutdown_command": "shutdown -P now",
      "output_directory": "ubuntu_1604",
      "boot_command": [
        "<enter><wait><f6><esc><bs><bs><bs><bs><bs><bs><bs><bs><bs><bs><bs><bs><bs><bs><bs><bs><bs>",
        "<bs><bs><bs><bs><bs><bs><bs><bs><bs><bs><bs><bs><bs><bs><bs><bs><bs><bs><bs><bs><bs><bs>",
        "<bs><bs><bs><bs><bs><bs><bs><bs><bs><bs><bs><bs><bs><bs><bs><bs><bs><bs><bs><bs><bs><bs>",
        "<bs><bs><bs><bs><bs><bs><bs><bs><bs><bs><bs><bs><bs><bs><bs><bs><bs><bs><bs><bs><bs><bs>",
        "/install/vmlinuz<wait>",
        " auto<wait>",
        " console-setup/ask_detect=false<wait>",
        " console-setup/layoutcode=us<wait>",
        " console-setup/modelcode=pc105<wait>",
        " debconf/frontend=noninteractive<wait>",
        " debian-installer=en_US<wait>",
        " fb=false<wait>",
        " hostname=ubuntu1604<wait>",
        " initrd=/install/initrd.gz<wait>",
        " kbd-chooser/method=us<wait>",
        " keyboard-configuration/layout=USA<wait>",
        " keyboard-configuration/variant=USA<wait>",
        " locale=en_US<wait>",
        " noapic<wait>",
        " preseed/file=/floppy/ubuntu_preseed.cfg",
        " -- <wait>",
        "<enter><wait>"
      ],
      "guest_os_type": "ubuntu-64",
      "floppy_files": [
        "./ubuntu_preseed.cfg"
      ],
      "vmx_data": {
        "ethernet0.present": true,
        "ethernet0.startConnected": true,
        "ethernet0.connectionType": "custom",
        "ethernet0.vnet": "vmnet8",
        "ethernet1.present": true,
        "ethernet1.startConnected": true,
        "ethernet1.connectionType": "custom",
        "ethernet1.vnet": "vmnet1"
      },
      "vmx_data_post": {
        "floppy0.present": false,
        "ethernet1.present": false
      }
    }
  ]}
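At this point you could already give it a spin. Assuming the whole configuration is saved as ubuntu1604.json (the file name is just an example), validating and running the build looks like this:

packer validate ubuntu1604.json   # check the template for syntax and configuration errors
packer build ubuntu1604.json      # create the VM, boot the installer, and run the build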

Step 2: installing the operating system in the new virtual machine

There is almost no trick needed here, except that you have to make sure that DHCP is enabled on all network interfaces. Many operating systems' installers only enable DHCP on the one (default) interface that you pick during installation. This can be worked around with some custom amendments to the installer configuration.

For example, on Ubuntu 16.04 the easiest way to do this is to append a few lines to /etc/network/interfaces which enable DHCP on the second interface as well. You can find more information on that file here: https://help.ubuntu.com/lts/serverguide/network-configuration.html. So, you should have something like this in your pre-seed file:

d-i preseed/late_command string in-target sh -c 'echo "auto ens33\niface ens33 inet dhcp" >> /etc/network/interfaces'

ens33 is the default name given to the second interface on my setup, so it is used here. As you can see, the network configuration is changed and the DHCP client is enabled on the ens33 interface as well. If you are using some other operating system, refer to its manual to find out what exactly you have to change to enable the DHCP client on the second interface.
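For reference, after the installer runs that late_command, the end of /etc/network/interfaces in the template should contain a stanza like this:

# stanza appended by the pre-seed late_command above
auto ens33
iface ens33 inet dhcp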

You are free to add anything you want to the provisioners section of the Packer configuration. In my case, I am uploading some default configuration to /etc/profile.d and making those files executable so that they are executed each time someone logs in.
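As an illustration only (the file names are made up), such a provisioners section could look roughly like this, using Packer's file and shell provisioners:

{
  "provisioners": [
    {
      "type": "file",
      "source": "./files/00-defaults.sh",
      "destination": "/tmp/00-defaults.sh"
    },
    {
      "type": "shell",
      "inline": [
        "mv /tmp/00-defaults.sh /etc/profile.d/00-defaults.sh",
        "chmod +x /etc/profile.d/00-defaults.sh"
      ]
    }
  ]
}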

Step 3: post processing a.k.a. uploading the actual template

This is an important trick that you have to do. The built-in vsphere-template post-processor only supports this action if you use the vmware-iso builder in a remote configuration, so some trickery is needed.

First, you need to run ovftool to convert the *.vmx file to an *.ovf one. Use the shell-local post-processor like this:

{
  "type": "shell-local",
  "inline": [
    "/c/Program\\ Files\\ \\(x86\\)/VMware/VMware\\\nWorkstation/OVFTool/ovftool.exe ubuntu_1604/Ubuntu_16.04_x64.vmx ubuntu_1604/Ubuntu_16.04_x64.ovf"
  ]
}

This will convert the resulting vmx file to an ovf one. Next, you have to use the artifice post-processor to change the list of artifacts so that the next post-processor in the chain picks up the new ovf file as well.

The post-processor that we will use is called vsphere-template. It sifts through the artifact list (which we will generate with artifice), uploads the OVF file to the vCenter, and converts it into a template. You can find it here: https://github.com/andrewstucki/packer-post-processor-vsphere-template (props to andrewstucki!). You can get all of the details there. To make it short:

  • download the executable file into some known location
  • edit %APPDATA%/packer.config and add something like this there (change the exact path depending on your set-up):
{
  "post-processors": {
    "vsphere-template": "C:\\packer-post-processor-vsphere-template_darwin_amd64.exe"
  }
}

For the final piece, you will have to create a post-processor chain with artifice and vsphere-template, as described here: https://www.packer.io/docs/post-processors/artifice.html. You will need to enclose the last two post-processors in square brackets so that they run as a chain. The final version of the post-processors section looks like this:

{
  "post-processors": [
    {
      "type": "shell-local",
      "inline": [
        "/c/Program\\ Files\\ \\(x86\\)/VMware/VMware\\ Workstation/OVFTool/ovftool.exe ubuntu_1604/Ubuntu_16.04_x64.vmx ubuntu_1604/Ubuntu_16.04_x64.ovf"
      ]
    },
    [
      {
        "type": "artifice",
        "files": [
          "ubuntu_1604\\*"
        ]
      },
      {
        "type": "vsphere-template",
        "datacenter": "example_datacenter",
        "datastore": "example_datastore",
        "host": "example_host",
        "password": "example_password",
        "username": "example_username",
        "vm_name": "Ubuntu_16.04_x64",
        "resource_pool": "example_resourcepool",
        "folder": "example_folder"
      }
    ]
  ]
}

Links

The Packer JSON configuration file looks like this: https://gist.github.com/GiedriusS/04b9881595882fdee61a83d3c7dd3f3b

The pre-seed file that I used for Ubuntu 16.04: https://gist.github.com/GiedriusS/9053014e39ad17a7e4669e28bc7494bc

For other operating systems there is not much difference, except that you will have to figure out what boot options to use, the format of the installer configuration, and what to pass to the network management daemon that the operating system uses in order to enable DHCP on all interfaces.