How to solve puppetdb and SIGKILL problems

In this post, I want to share something I learned recently. Sometimes your Puppet server might start returning 500 HTTP status codes. You should, of course, check puppetserver's logs first, but another thing to check is puppetdb, the component responsible for storing Puppet-related data. In this case, there were memory-related problems on the puppetdb instance, and its logs looked like this:

Apr 02 15:25:19 hostname systemd[1]: Started puppetdb Service.
Apr 04 18:54:07 hostname systemd[1]: puppetdb.service: main process exited, code=killed, status=9/KILL
Apr 04 18:54:07 hostname systemd[1]: Unit puppetdb.service entered failed state.
Apr 04 18:54:07 hostname systemd[1]: puppetdb.service failed.
Apr 04 18:54:08 hostname systemd[1]: puppetdb.service holdoff time over, scheduling restart.
Apr 04 18:54:08 hostname systemd[1]: Stopped puppetdb Service.
Apr 04 18:54:08 hostname systemd[1]: Starting puppetdb Service...

The regular logs in /var/log/puppetlabs/puppetdb/puppetdb.log also did not give any particular hint as to why the process was killed with that signal. So, what could have been the issue?

It turns out that the Java virtual machine supports some interesting parameters. One of them is -XX:OnOutOfMemoryError, which lets you specify a command to execute once the JVM runs out of memory. In this case, it was set to -XX:OnOutOfMemoryError=kill -9 %p, which means that SIGKILL (9) is sent to the JVM process when it runs out of memory.
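
For illustration, this is roughly how such a flag could look in a JVM invocation (the command line below is a made-up sketch, not the actual puppetdb one; %p gets replaced with the PID of the JVM process):

java -Xmx2g -XX:OnOutOfMemoryError="kill -9 %p" -jar some-application.jar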

A quick GitHub search shows that it is a pretty prevalent thing to do. However, the problem is that it makes a denial-of-service attack relatively trivial because there is no graceful load shedding: if there is no queuing mechanism in place, it only takes sending a bunch of requests that allocate some memory. It also provides no way of knowing where all of the memory was allocated when such a situation occurs. Plus, the state could get so bad that it might even be impossible to spawn the new process that is supposed to send that signal. So, it should be avoided. But, hey, at least systemd conveniently shows that the process has been killed with SIGKILL (9) in this scenario.

Obviously, when this happens you should adjust the -Xmx and -Xms parameters of your JVM process to permit it to allocate more memory. Of course, that is only possible if there is enough memory available.
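
For puppetdb specifically, the heap settings usually live in the service's environment file. A hedged example (the exact path and a sensible heap size depend on your distribution and workload):

# /etc/sysconfig/puppetdb on Red Hat-based systems, /etc/default/puppetdb on Debian-based ones
JAVA_ARGS="-Xmx2g -Xms2g"

Restart the puppetdb service afterwards so that the new limits take effect.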

Looking into the past, it seems that this appeared in puppetdb with this commit. As far as I can tell, it was added because old JVM versions (pre-2016) did not support any other way of dealing with this situation besides adding a huge try/catch block and handling all of the possible cases, or just crashing out. Plus, the exception could even be thrown in a separate thread that is not under our control, so handling it requires a lot of extra effort for not much gain.

But since 2016, new and better ways of dealing with this situation have been introduced. Consider using -XX:+ExitOnOutOfMemoryError or -XX:+CrashOnOutOfMemoryError as per these release notes if your JVM is new enough. They avoid some of the problems mentioned earlier, such as being unable to start another process. It is worth mentioning that other users of that flag, such as prestodb, are slowly moving towards the new flags with commits such as this.
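
A minimal sketch of what the newer flags look like when added to the JVM options (the behaviour in the comments is taken from the release notes linked above):

-XX:+ExitOnOutOfMemoryError    # exit the JVM on the first OutOfMemoryError
-XX:+CrashOnOutOfMemoryError   # write a crash report (hs_err file, core dump) and then exit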

In general, it is probably best to also enable -XX:+HeapDumpOnOutOfMemoryError (and point -XX:HeapDumpPath at a suitable directory) if you have enough spare space on the servers where your JVM processes are running.
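
Putting it all together, the JVM options could look something like this (the heap sizes and the dump path are placeholders, adjust them to your environment):

JAVA_ARGS="-Xmx2g -Xms2g -XX:+ExitOnOutOfMemoryError -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/var/tmp"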

This also reminded me of a great article on LWN about crash-only software. Perhaps it really is easier to just crash and quickly restart if something goes wrong? But at the very least the software should print some diagnostic messages before destroying itself, to help its users understand what is going on in situations like these.

/etc/nsswitch.conf and /etc/hosts woes with the Alpine (and others) Docker image and Golang

By default, the alpine Docker image, which is often used as a base for Go programs, does not contain /etc/nsswitch.conf. Back in the day (before April 30, 2015), Go's net package did a sane thing when that file did not exist: it checked the /etc/hosts file first and only then moved on to querying the DNS servers.

However, this was changed later: when the file is missing, Go's resolver now follows the default described in the nsswitch.conf(5) manual page and queries DNS first. You can find the commit here.

Most distributions ship /etc/nsswitch.conf by default, set to use the files database for name resolution first:

hosts: files dns

Thus, the easiest way to fix this is to mount your own local, normal nsswitch.conf inside the Alpine container by passing an extra -v parameter to docker run like so: docker run -v /etc/nsswitch.conf:/etc/nsswitch.conf.

To further customize the name resolution, configure /etc/nsswitch.conf as per your needs.
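
Alternatively, you can bake the file into the image itself. A minimal sketch (the base image tag and the hosts line are only examples, adjust them to your needs):

FROM alpine:3.12
# make Go's resolver (and glibc, where present) consult /etc/hosts before DNS
RUN echo 'hosts: files dns' > /etc/nsswitch.conf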

Even forcing Go to use the cgo resolver would not help much in most cases, because glibc (the most popular libc) follows the exact same steps when /etc/nsswitch.conf does not exist.
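
If you want to see which resolver and lookup order Go picks on a given system, the GODEBUG=netdns knob is handy. A small sketch (the host name is hypothetical, substitute one from your /etc/hosts):

package main

import (
	"fmt"
	"net"
)

func main() {
	// run with e.g. GODEBUG=netdns=go+1 or GODEBUG=netdns=cgo+1 to force a
	// resolver and print debug information about the chosen lookup order
	addrs, err := net.LookupHost("some-host-from-etc-hosts")
	fmt.Println(addrs, err)
}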

What makes it more painful is that, if you use Go's internal resolver, you have to restart your Go programs after adding /etc/nsswitch.conf or changing it, because the resolver does not watch for changes and does not automatically reload what it has in memory. I guess that Go, again, follows the principle outlined in the aforementioned manual:

Within each process that uses nsswitch.conf, the entire file is read only once.  If the file is later changed, the process will continue using the old configuration.

This affects a lot of publicly available container images that are based on Alpine. You can find an example list here.

Some other popular images have had this issue too. For example, as recently as two months ago Prometheus did not have that file in its Docker image quay.io/prometheus/busybox either. This was fixed here.

So, in any case, tread carefully in the GNU/Linux container world if you are developing a Go program. You might run into some confusing behavior if /etc/nsswitch.conf does not exist. At the very least, add a minimal one to get proper name resolution.