Go’s http.Transport and 408 response code – what’s the relationship?

Intro

Go’s standard library package net/http provides the type http.Transport, which implements the low-level mechanics of transporting HTTP requests – hence the name. It is very useful – it satisfies the http.RoundTripper interface, which many other libraries accept – however, you might have noticed that using it in combination with haproxy and HTTP keep-alive connections sometimes makes these kinds of messages appear in your program:

2018/08/21 11:22:33 Unsolicited response received on idle HTTP channel starting with "HTTP/1.0 408 Request Time-out\r\nCache-Control: no-cache\r\nConnection: close\r\nContent-Type: text/html\r\n\r\n<html><body><h1>408 Request Time-out</h1>\nYour browser didn't send a complete request in time.\n</body></html>\n"; err=<nil>

And in your haproxy logs:

<142>Aug 21 11:22:33 myhost[32584]: 1.2.3.4:44444 [21/Aug/2018:11:22:32.041] http~ http/<NOSRV> -1/-1/-1/-1/10050 408 212 - - cR-- 0/0/0/0/0 0/0 "<BADREQ>"

This blog post goes through why that happens, whether it is harmful, and how you can avoid it.


In case you did not know: once you start using the net/http package with keep-alive connections, Go’s embedded, albeit rudimentary, deadlock checker is effectively disabled, because the package internally spawns new goroutines to keep track of those connections. Otherwise, that tracking would have to “bubble up” back to the caller. Since those background goroutines exist, the runtime no longer reports that all goroutines are blocked, and you avoid harmless deadlock messages when the main goroutine is blocked as well, performing some other work.

To actually keep HTTP connections alive, some kind of liveness check is needed. These are called HTTP probes: periodic sends of bytes on an HTTP socket to check whether the other end is still alive.

However, if there is a mismatch between the keep-alive timeouts on the two sides, our client using http.Transport may still expect the idle connection to be usable. On the client side, this is controlled by the IdleConnTimeout field of http.Transport. On the haproxy side, it is controlled by timeout http-keep-alive <timeout> in the configuration file.
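As a sketch, the relevant haproxy setting might look like this (the 10-second value and the section layout are illustrative, not taken from any real configuration):

```
# haproxy.cfg (illustrative fragment)
defaults
    mode http
    # Close keep-alive connections that stay idle for 10 seconds.
    timeout http-keep-alive 10s
```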

The error message mentioned in the first paragraph comes up when such an HTTP probe is sent to the server (in our case, haproxy) but, while the bytes are still on the wire, the connection has already been closed on the other end because it timed out. Thus, to fix it, you need to make sure the client’s IdleConnTimeout does not exceed haproxy’s keep-alive timeout.

Let’s also analyze the haproxy log message that is printed whenever this happens. The termination state cR-- means that a client-side timeout (c) expired while haproxy was still waiting for a complete request (R), after which haproxy closed the connection. This happened because the keep-alive connection reached the timeout value and was then closed automatically. As you can see, the total duration of the session – 10050 ms – is very close to the timeout value of 10 seconds. Of course, it is not a real-time system, so a margin of error of a few dozen milliseconds is fine.

The error message in our Go program reports the same thing: our “browser” – in other words, the client using http.Transport – did not send any request on the idle connection. When the background goroutine that keeps track of the keep-alive connection received the 408 response without an outstanding request, it was treated as an unsolicited response and printed to the console to inform the user what had happened.


How to fix this?

Make sure that the keep-alive timeout is lower on the client end, i.e. the side that initiates the HTTP connections. In Go programs, this is done by setting IdleConnTimeout in http.Transport to a value lower than the server’s timeout. If you use some other library for making HTTP requests (you should really use the standard library one, though, unless you have serious issues with it), then modify whichever option or field controls that timeout instead.

Also, haproxy provides an option to ignore HTTP probes: option http-ignore-probes. However, it may also make haproxy ignore some other, legitimate errors, so your mileage may vary – use it with caution. I recommend the first option if you can modify the program.
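If you do want to try it, the option goes into the relevant defaults or frontend section, roughly like this (the section layout is illustrative):

```
# haproxy.cfg (illustrative fragment)
defaults
    mode http
    # Silently close connections that were opened but never
    # carried a complete request, instead of replying with 408.
    option http-ignore-probes
```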


I wrote this because my proposals to include such documentation in minio-go were met with a negative response, and I feel this should be public knowledge.

Are Scrum, Agile, and other iterative programming methodologies useful for niche programming specialties such as SREs, DevOps engineers, and so on?

Intro

Recently I heard that in an “IT Systems Engineer” interview someone asked: “Why are you using Agile in your team? Aren’t most of the tasks on your team ad-hoc?” This made me think about the topic more deeply. The work done in these types of teams might seem distant from regular programming at first. However, it is not: in general, it is focused on automating things with software, avoiding manual labor, and building reliable systems. In other words, only the product and the clients are different – they are internal, whereas usually they are external. Let me try to explain.

Difference between sysadmins and SREs

Let’s first note the difference between the two – this is important because a lot of people have the wrong perception that everyone other than developers does not create any software. In fact, the SRE/DevOps movement is mainly about eliminating toil. For example, Google does not allow its SREs to spend more than 50% of their time on manual, operational work; anything beyond that should be handled by automation. The sysadmin model, on the other hand, divides IT and development into two separate silos – developers and system administrators – with everyone doing their own thing and barely collaborating. Sysadmins also usually do not create much software of their own – mostly small Perl or Bash scripts. As you can see, there is a stark difference in mindset, and here we are talking about the former.

Iterative programming methodologies for SREs

Just like in the “normal” programming world, changing requirements are an inevitable part of the whole gig – unless, perhaps, you are making software where reliability is of the utmost importance: for example, you would not want a plane to crash because of an overflow error (which still happens sometimes, as in the case of the Boeing 787 Dreamliner).

The benefits and downsides of each methodology are the same here as anywhere else. Let me propose an example: after a month, the DNS service you maintain might get a new requirement – the self-service should get a batch-creation feature that lets users create many records with only a few clicks, saving them precious time. Such a requirement might not be apparent at the beginning.

This is where iterative techniques such as Agile are useful, because they embrace that uncertainty and let you pivot in the middle of your development process (between sprints). With sequential development methodologies, you would have to wait until the end of a whole cycle to implement any new requirements.

Practically employing iterative programming in the day to day life of a site reliability engineer

For special, toil-type tasks, you could create one umbrella task in your time-tracking software. For example, create a task in Jira under which all other toil tasks are created as sub-tasks, where the nitty-gritty details are written down and the time is tracked.

Afterward, you could sum up the time spent on toil-type tasks versus the others. That tells you whether you or your engineers spend more than 50% of your time on this type of work, which could be a signal that something is wrong.