Do not forget to open port 443 when using ovftool on ESXi hosts and deploying to a vCenter

[Message from the author: are you, dear reader, looking for someone with Python, C, Go, Linux, DevOps expertise? Look no further! I am always looking for new remote work opportunities and part-time freelancing gigs. If you are interested, contact me: giedriuswork (at) gmail (dot) com]

There is a pretty neat tool called ovftool that makes it easy to deploy an OVA/OVF from any source or extract the contents of an OVA. Ordinarily people run it on their own machines, but it is possible to run it on an ESXi host as well, as described in these articles: http://www.virtuallyghetto.com/2012/05/how-to-deploy-ovfova-in-esxi-shell.html, https://gist.github.com/ruzickap/fda71997083a41b0c6cd. However, if you tried that and then tried to deploy the OVA to another vCenter from the ESXi machine, you would have run into an error similar to this fictional example:

[root@foo:~] /vmfs/volumes/bar/OVFTool/vmware-ovftool/ovftool /vmfs/volumes/bar/qux.ova vi://foo:bar@baz.local/ 
Opening OVA source: /vmfs/volumes/ ...
The manifest validates
...
Error: Internal error: Failed to connect to server
Completed with errors

This happens because the vi:// protocol uses port 443 (a.k.a. HTTPS) by default, and outgoing connections on that port are usually blocked by the ESXi firewall. To rectify this, go to your host's firewall settings through the vSphere client:

Click on your host in vSphere, open the Firewall properties menu, check the box next to the relevant ruleset, and click OK.

Then it should work; try running the same command again. Do not forget to uncheck this box when you are finished with your deployment, because you obviously do not want unused ports left open on your precious ESXi hosts. Alternatively, you could SSH to the host and run these commands:

esxcli network firewall ruleset set --ruleset-id esxupdate --enabled true

This enables the ruleset that opens port 443, which is needed for this operation. To disable it again after you are done:

esxcli network firewall ruleset set --ruleset-id esxupdate --enabled false
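
If you are not sure whether the “esxupdate” ruleset exists on your host, or whether it is currently enabled, you can list the rulesets (the grep is just for convenience):

esxcli network firewall ruleset list | grep esxupdate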

If a ruleset named “esxupdate” does not exist, you can create your own ruleset for port 443 and then, obviously, substitute your own name for “esxupdate” in these examples and in the vSphere client steps above.
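
As a rough sketch of what creating such a ruleset could look like: the file name ovftool443.xml, the service name ovftool443 and the exact XML layout below are my own example, so compare them against the rule files already shipped in /etc/vmware/firewall/ on your ESXi version, and keep in mind that files added there by hand are typically not persistent across reboots:

cat > /etc/vmware/firewall/ovftool443.xml << 'EOF'
<ConfigRoot>
  <service>
    <id>ovftool443</id>
    <rule id='0000'>
      <direction>outbound</direction>
      <protocol>tcp</protocol>
      <porttype>dst</porttype>
      <port>443</port>
    </rule>
    <enabled>true</enabled>
    <required>false</required>
  </service>
</ConfigRoot>
EOF
# Reload the firewall configuration so the new ruleset shows up in the list
esxcli network firewall refresh

After the refresh, the new ruleset should appear in the output of esxcli network firewall ruleset list and can be enabled and disabled exactly like esxupdate above.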

Short tale about proxies and /etc/hosts on GNU/Linux

I encountered an interesting behavior the other day when trying to use an HTTP and/or HTTPS proxy on a GNU/Linux machine. cURL and friends read environment variables called http_proxy and https_proxy. So, you might think, if you want “global” proxy settings, you just have to create a new file, writable only by root, in /etc/profile.d with content like this:

#!/bin/sh
export http_proxy="http://1.2.3.4"
export https_proxy="$http_proxy"

However, you might notice a confusing occurrence when trying to use hostnames like localhost to refer to your own machine. Unfortunately, priority is given to the proxy setting regardless of whether the hostname is defined statically (i.e. it is in /etc/hosts) and resolves to a local IP like 127.0.0.1. Your proxy might be a bit stupid and resolve localhost to its own (or even a different) IP, and then you get burned because you wanted to refer to your own computer, not the proxy. This is especially confusing when the output is very similar to what you were expecting, for example when you are setting up a simple web server on your machine and the proxy responds with a placeholder page. I can confess that I fell into this trap, but I will not anymore.
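
A quick way to see this behavior for yourself is something along these lines (assuming python3 is available to act as a throwaway web server, and with 1.2.3.4 standing in for your proxy):

# Start a trivial web server on this machine
python3 -m http.server 8000 &
# With the proxy set, curl sends the request to 1.2.3.4 instead of
# 127.0.0.1, even though localhost is listed in /etc/hosts
http_proxy=http://1.2.3.4 curl -v http://localhost:8000/

In the verbose output you can watch curl connect to the proxy address rather than to your own machine, so whatever comes back (a placeholder page, an error, or nothing at all) is the proxy's doing, not your web server's.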

To solve this you have to leverage another environment variable called no_proxy. It contains a comma-separated list of hosts that should not go through the proxy. So, you have to append another setting to the /etc/profile.d/proxy.sh file that you created before. In the end, it will look something like this:

#!/bin/sh
export http_proxy="http://1.2.3.4"
export https_proxy="$http_proxy"
export no_proxy="localhost"
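
After logging in again, or after sourcing the file by hand in an existing shell, you can check that the variables are in place and that localhost traffic now bypasses the proxy (port 8000 here is the same throwaway web server as in the earlier example):

. /etc/profile.d/proxy.sh
# All three variables should show up here
env | grep -i proxy
# This request now goes straight to 127.0.0.1:8000 instead of the proxy
curl http://localhost:8000/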

Even though this solves our problem, it comes with a number of caveats:

  • You have to list all of the hostnames that you do not want to use the proxy for, and that becomes susceptible to human error when the list grows big. This can be partially solved by using a configuration management system like Puppet, so that you can be reasonably sure you made no mistakes.
  • You have to maintain /etc/hosts in tandem with /etc/profile.d/proxy.sh to make sure that the same entries are in both files. Again, configuration management tools like Puppet or Chef can solve this easily, but it remains error prone whenever either file is modified by a human being, so that is not a real excuse, and we saw how even big companies like “Amazon” sometimes mess this up. One possible workaround is sketched right after this list.
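
That workaround, shown here purely as an illustration (it ignores inline comments in /etc/hosts and will happily pick up IPv6 aliases, so treat it as a starting point rather than a drop-in solution), is to derive no_proxy from the hostnames in /etc/hosts inside proxy.sh itself:

#!/bin/sh
export http_proxy="http://1.2.3.4"
export https_proxy="$http_proxy"
# Collect every hostname listed in /etc/hosts (skipping comment lines)
# into a comma-separated list; "localhost" is appended at the end so the
# list never finishes with a dangling comma.
no_proxy="$(awk '!/^[[:space:]]*#/ && NF > 1 { for (i = 2; i <= NF; i++) printf "%s,", $i }' /etc/hosts)localhost"
export no_proxy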

In conclusion, you have to be careful with /etc/hosts and proxies on GNU/Linux, especially with localhost. If you are going to set a “global” proxy, do not forget to at least add localhost to the no_proxy environment variable. As far as I know, there is no easy way to change this behavior of cURL and friends so that they would consult /etc/hosts first before sending a request to the proxy. Personally, I think some kind of option should be provided to change this default behavior, to alleviate the negative aspects that I mentioned before.