Things Learned From Trying to Migrate to the Protobuf V2 API From gogoprotobuf (So Far)

During the most recent iteration of the LFX mentorship program, I had the honor of working on migrating the Thanos project from gogoprotobuf to version 2 of the protobuf API, together with my one and only awesome mentee Rahul Sawra and fellow mentor Lucas Serven, who is also a co-maintainer of Thanos. I want to share my technical learnings from this project.

LFX mentorship program’s logo

First of all, let’s quickly look at what protocol buffers are and what the different words in the jargon mean. Protocol buffers are a way of serializing structured data, originally created at Google. The library is quite popular and Thanos uses it pretty much everywhere. Thanos also uses gRPC to talk between its different components. gRPC is a remote procedure call framework: with it, your (micro-)services can expose methods that other services can call.
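As a minimal illustration (not taken from Thanos), here is what serializing a message to the wire format and back looks like with the v2 Go API, using the well-known Duration type:

package main

import (
	"fmt"
	"time"

	"google.golang.org/protobuf/proto"
	"google.golang.org/protobuf/types/known/durationpb"
)

func main() {
	// Serialize a message into the compact protobuf wire format.
	d := durationpb.New(90 * time.Second)
	b, err := proto.Marshal(d)
	if err != nil {
		panic(err)
	}
	fmt.Printf("%d bytes on the wire\n", len(b))

	// And deserialize it back into a typed message.
	var decoded durationpb.Duration
	if err := proto.Unmarshal(b, &decoded); err != nil {
		panic(err)
	}
	fmt.Println(decoded.AsDuration())
}

gRPC then layers the remote procedure calls on top: the request and response of each method are protocol buffers messages like this one.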

Since both originated at Google, it is not surprising that gRPC is most commonly used with protocol buffers, even though neither strictly depends on the other.

gogoprotobuf is a fork of the original protocol buffers compiler for Go that has (had?) some improvements over the official one. However, it does not come without downsides. Over time, we have accumulated random hacks to make the generated code compile and work; for example, we edit import statements with sed. This looks like an opportunity for improving code generation tools – perhaps more checks are needed? What’s more, it turns the whole code generation into a “house of cards”: remove one small part and the whole thing crumbles. On the other hand, it is not surprising that an unmaintained tool has a bug here and there.

Thanos adopted gogoprotobuf some time in the past, but the project eventually became unmaintained. At some point, the fine Vitess folks came up with their own protocol buffers compiler for the v2 API, vtprotobuf, which has some nice optimizations that bring it up to par with the old gogoprotobuf performance. In addition, it supports pooling memory on the unmarshaling side, i.e. the receiver. The sender’s side, unfortunately, still cannot use pooling because gRPC’s SendMsg() method returns before the message gets on the wire. I feel this is a serious performance issue and I’m surprised that the gRPC developers still haven’t fixed it. This is the first learning that I wanted to share.
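To sketch what the pooling looks like on the receiving side: assuming a message type pb.Chunk generated by vtprotobuf with the pool feature enabled (the package and type names here are hypothetical), the generated helpers are used roughly like this:

// pb is a hypothetical package generated by vtprotobuf with the
// "pool" feature enabled for the Chunk message.
func handleFrame(data []byte) error {
	// Take a message from the pool instead of allocating a fresh one.
	msg := pb.ChunkFromVTPool()
	// Hand it back once done so the next unmarshal can reuse the memory.
	defer msg.ReturnToVTPool()

	if err := msg.UnmarshalVT(data); err != nil {
		return err
	}
	// ... work with msg before it returns to the pool ...
	return nil
}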

Another thing is about copying generated code. Sometimes the generated code is not perfect, so the easiest and most straightforward fix is to copy it, change the parts you don’t like, and commit the result to Git. However, that is certainly far from ideal. We made this mistake in the Thanos project: we copied a generated struct and its methods into another file and added our own optimization. We call it the ZLabelSet. Here is its code. As you can see, it is an optimization to avoid an allocation. However, this way the struct members of the generated code became deeply coupled with the rest of the custom code. Changing the types of those members, which effectively became an interface, is now much more painful – especially because the v2 API does not support non-nullable struct members.
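The core of the trick is reinterpreting a slice of one struct type as a slice of another type with an identical memory layout, so nothing gets copied. Here is a simplified, self-contained sketch of the idea – the real Thanos code is more involved:

package main

import (
	"fmt"
	"unsafe"
)

// Label mirrors the shape of Prometheus' labels.Label: two string fields.
type Label struct {
	Name, Value string
}

// ZLabel has exactly the same memory layout as Label.
type ZLabel struct {
	Name, Value string
}

// zlabelsFromLabels reinterprets the slice header in place, so no new
// backing array is allocated and no elements are copied.
func zlabelsFromLabels(lset []Label) []ZLabel {
	return *(*[]ZLabel)(unsafe.Pointer(&lset))
}

func main() {
	lset := []Label{{Name: "job", Value: "thanos"}}
	fmt.Println(zlabelsFromLabels(lset)[0].Value)
}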

On the other hand, using interfaces in Go incurs extra performance costs, so don’t try to optimize too heavily: profile and always pick your battles.

This is the second lesson: please try not to copy generated code; instead, write your own protocol buffers compiler plugin or a similar tool. It is actually quite easy to do.
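For example, using the google.golang.org/protobuf/compiler/protogen package, a minimal plugin that writes an extra file next to every generated one looks roughly like this (the plugin name and the emitted content are only illustrative):

package main

import (
	"google.golang.org/protobuf/compiler/protogen"
)

func main() {
	protogen.Options{}.Run(func(gen *protogen.Plugin) error {
		for _, f := range gen.Files {
			if !f.Generate {
				continue
			}
			// Emit a sibling file with whatever helpers you would
			// otherwise patch into the generated code by hand.
			g := gen.NewGeneratedFile(f.GeneratedFilenamePrefix+"_helpers.pb.go", f.GoImportPath)
			g.P("// Code generated by protoc-gen-helpers. DO NOT EDIT.")
			g.P("package ", f.GoPackageName)
		}
		return nil
	})
}

Compile it as protoc-gen-<name>, put it on your PATH, and protoc will pick it up via the --<name>_out flag.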

Last but not least, I also wanted to talk about goals and focus. Since we had divided the whole project into as many small parts as possible, the main focus was on getting the existing test suite to pass. However, that is not always the best idea. We ran into a problem: gogoprotobuf has an extension that lets structs use a type more natural for Go programmers – time.Time – whereas the same extension doesn’t exist in vanilla protocol buffers for Go, which has its own separate type, timestamppb.Timestamp. Because timestamps are littered all over the Thanos codebase, we accidentally defined a bunch of conversion functions between those two types, and they weren’t identical. So, we had to take a step back and look at the invariants. To be exact, time.Time defines an absolute point in time, whereas timestamppb.Timestamp stores the seconds and nanoseconds that have passed since the Unix epoch. Only after unifying the conversion functions did everything work correctly (a sketch of such unified helpers follows the screenshot below). Keep in mind that those “small” parts of this project are thousands of lines added or removed, so it’s really easy to get lost. For example, this is one pull request that got merged:

Screenshot of https://github.com/rahulii/thanos/pull/1 showing the diffstat
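To make that lesson concrete, here is a minimal sketch of a single, unified pair of conversion helpers built on the well-known timestamppb package (the helper names are mine, not Thanos’):

package main

import (
	"fmt"
	"time"

	"google.golang.org/protobuf/types/known/timestamppb"
)

// One canonical pair of conversion helpers used everywhere, instead of
// several subtly different ones scattered around the codebase.
func timeToProto(t time.Time) *timestamppb.Timestamp {
	return timestamppb.New(t)
}

func timeFromProto(ts *timestamppb.Timestamp) time.Time {
	return ts.AsTime()
}

func main() {
	now := time.Now()
	// AsTime() always returns UTC, so compare instants, not string forms.
	fmt.Println(timeFromProto(timeToProto(now)).Equal(now))
}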

In conclusion, the third, more general learning is that sometimes it is better to take a step back and to look at how everything should work together instead of being fixated on one small part.

Perhaps in the future, code generation will be partly replaced by generics in Go 1.18 and later versions. That should make life easier. I also hope that we will pick up this work again soon and that I will be able to announce to everyone that we have finally switched to the upstream version of protocol buffers for Go. It seems like there is an appetite for that in our community, so the future looks bright. We’ve already removed the gogoproto extensions from our .proto files, and we are in the middle of removing the gogoproto compiler – https://github.com/rahulii/thanos/pull/2. We just need someone to finish all of this up and to start using the pooling functionality in Thanos Query. Will it be you who helps us finish this work? 😍🍻

Surprising Behavior With inotify And ioutil

How using ioutil.WriteFile() with inotify in tests might make them flaky

Recently, I was working on fixing one flaky test of the “reloader” component in the Thanos project. It was quite a long-standing issue – it took almost a whole year to fix. That is not surprising, as it is quite tricky. But before zooming into the details, let’s talk about what this test does and what other systems come into play.

Simply put, Thanos Sidecar is a sidecar component for Prometheus that not only proxies requests to it and captures the blocks produced by Prometheus to upload them to remote object storage, but can also automatically reload your Prometheus instance when certain configuration files change. For that, it uses the inotify mechanism in the Linux kernel. You can read more about inotify itself here. Long story short, it lets you watch files and get notified when something changes, e.g. when new data gets written to them.
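In Go, this is typically done through the fsnotify library, which wraps inotify on Linux. A minimal sketch of watching a directory (the path is made up) looks like this:

package main

import (
	"log"

	"github.com/fsnotify/fsnotify"
)

func main() {
	watcher, err := fsnotify.NewWatcher()
	if err != nil {
		log.Fatal(err)
	}
	defer watcher.Close()

	if err := watcher.Add("/etc/prometheus"); err != nil {
		log.Fatal(err)
	}

	// Every change in the watched directory arrives as an event.
	for ev := range watcher.Events {
		if ev.Op&(fsnotify.Write|fsnotify.Create) != 0 {
			log.Printf("%s changed: trigger a reload", ev.Name)
		}
	}
}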

The test in question exercises that reloader component: it checks whether the reloader successfully sends those “reload” HTTP requests in response to certain simulated events and whether it properly retries failed requests. Before the fix, it emulated changed files with the ioutil.WriteFile() call. However, during the tests, the number of HTTP calls received sometimes did not match the number expected. So, I looked at the events that the watcher had received via inotify and, surprisingly enough, sometimes writes were missing or duplicated. Here is how it looked during two different runs:

"/tmp/TestReloader_DirectoriesApply249640168/001/rule2.yaml": CREATE
"/tmp/TestReloader_DirectoriesApply249640168/001/rule2.yaml": WRITE
"/tmp/TestReloader_DirectoriesApply249640168/001/rule1.yaml": WRITE
"/tmp/TestReloader_DirectoriesApply249640168/001/rule1.yaml": WRITE
"/tmp/TestReloader_DirectoriesApply249640168/001/rule3.yaml": CREATE
"/tmp/TestReloader_DirectoriesApply249640168/001/rule3.yaml": CREATE
"/tmp/TestReloader_DirectoriesApply249640168/001/rule-dir/rule4.yaml": WRITE
"/tmp/TestReloader_DirectoriesApply249640168/001/rule-dir/rule4.yaml": WRITE

"/tmp/TestReloader_DirectoriesApply364923838/001/rule2.yaml": CREATE
"/tmp/TestReloader_DirectoriesApply364923838/001/rule2.yaml": WRITE
"/tmp/TestReloader_DirectoriesApply364923838/001/rule1.yaml": WRITE
"/tmp/TestReloader_DirectoriesApply364923838/001/rule1.yaml": WRITE
"/tmp/TestReloader_DirectoriesApply364923838/001/rule3.yaml": CREATE
"/tmp/TestReloader_DirectoriesApply364923838/001/rule3.yaml": CREATE
"/tmp/TestReloader_DirectoriesApply364923838/001/rule-dir/rule4.yaml": WRITE

You can see that for rule4.yaml one run produced two write events and the other only one. Apparently, inotify is permitted to coalesce two or more identical events into one if they occur in quick succession:

If successive output inotify events produced on the inotify file descriptor are identical (same wd, mask, cookie, and name), then they are coalesced into a single event if the older event has not yet been read (but see BUGS). This reduces the amount of kernel memory required for the event queue, but also means that an application can’t use inotify to reliably count file events.

https://man7.org/linux/man-pages/man7/inotify.7.html

Then, I started looking into the ioutil.WriteFile() function’s code, because that is what we had been using to perform the writes. Inside of it, I found this:

f, err := OpenFile(name, O_WRONLY|O_CREATE|O_TRUNC, perm)
if err != nil {
	return err
}
_, err = f.Write(data)

if err1 := f.Close(); err1 != nil && err == nil {
	err = err1
}

return err

And this is where the surprising behavior comes from – opening a file with O_TRUNC also counts as a write:

IN_MODIFY (+)
          File was modified (e.g., write(2), truncate(2)).

Now this explains everything: because of the O_TRUNC followed by the write, ioutil.WriteFile() can generate either one or two inotify events for the watcher, depending on how fast the watcher reads them. It is easy to avoid this issue – one simple way is to create a temporary file with ioutil.TempDir() and ioutil.TempFile(), write the content there, and then move it into place with os.Rename().
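Here is a minimal sketch of that fix, using the modern os equivalents of those ioutil helpers (the function name is mine):

package main

import (
	"os"
	"path/filepath"
)

// atomicWrite writes the data to a temporary file first and only then
// moves it into place, so the watcher never sees intermediate events.
func atomicWrite(path string, data []byte) error {
	// The temporary directory must live on the same filesystem as the
	// target path, otherwise os.Rename will fail.
	dir, err := os.MkdirTemp("", "atomic-write-*")
	if err != nil {
		return err
	}
	defer os.RemoveAll(dir)

	tmp := filepath.Join(dir, filepath.Base(path))
	if err := os.WriteFile(tmp, data, 0o644); err != nil {
		return err
	}
	// rename(2) replaces the target atomically, so the watcher gets a
	// single event instead of a truncate followed by one or more writes.
	return os.Rename(tmp, path)
}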