programming is terrible: lessons learned from a life wasted

HTTP-NG, or the ghost of HTTP2.0

The results of the last attempt to rebuild HTTP, aka HTTP “Next-Generation” or HTTP-NG, make fun bedtime reading.

Let us go back more than a decade, to when HTTP/1.1 was appearing. Lots of people were excited to make HTTP2, so they formed a group called HTTP-NG. They set out to fix HTTP. Not to spoil the story, but you can probably guess what happened. They failed.

It started off sounding reasonable: let’s fix the transport issues, the parsing issues, and the tunneling issues. Let’s make it easy to multiplex over HTTP, because routers suck; let’s make HTTP easier to parse, because it’s a bit of a clusterfuck; and let’s work out how to tunnel things properly, because people are doing it anyway.

As a result, they decided to write a new transport layer, build a remote invocation framework for distributed objects, and then implement HTTP on top of it. Simple!

The new transport layer was called MUX. As with many attempts to multiplex streams over TCP, they had to re-implement bits of TCP on top of it. (Notably, other unpopular protocols like BEEP continue in this great tradition.) Simple!
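
To see why these layers always end up duplicating TCP, here is a minimal sketch of framing several logical streams over one connection. This is not the MUX wire format (that lives in the W3C notes); the frame layout, field sizes, and helper names below are made up for illustration. The point is that once every chunk carries a stream id, you quickly find yourself adding per-stream flow control, open/close handshakes, and the rest of TCP in miniature.

```python
import struct

# Hypothetical frame types for a toy multiplexing layer (not MUX itself).
DATA, WINDOW_UPDATE, CLOSE = 0, 1, 2

def pack_frame(frame_type, stream_id, payload=b""):
    # 1-byte type, 4-byte stream id, 2-byte payload length, then payload.
    return struct.pack("!BIH", frame_type, stream_id, len(payload)) + payload

def unpack_frame(buf):
    # Returns (type, stream id, payload, remaining bytes).
    frame_type, stream_id, length = struct.unpack("!BIH", buf[:7])
    return frame_type, stream_id, buf[7:7 + length], buf[7 + length:]

def window_update(stream_id, extra_bytes):
    # A slow reader on one stream stalls every stream sharing the
    # connection, so each side ends up advertising per-stream windows:
    # flow control, re-invented one layer up.
    return pack_frame(WINDOW_UPDATE, stream_id, struct.pack("!I", extra_bytes))
```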

The remote invocation protocol set out to unify ‘the type systems of COM, CORBA, and Java RMI’, on the basis that ‘this need not produce a terribly bloated result’. This now meant handling reference/value distinctions and distributed garbage collection. Simple!

Now that they had unified the world of software, they could finally implement HTTP really easily atop this gigantic turd. Simple! If only it weren’t for those pesky “legacy” installations, we’d be living in paradise.

For some reason, implementing support for distributed objects and multiplexing never really took off. Technical superiority has never made a convincing argument for adoption, especially when that superiority comes at a high implementation cost, for frankly little gain, and with no interoperability.

HTTP-NG attempted to solve the problem we wished we had (how *should* HTTP work?) rather than the problem we actually face (how do we improve HTTP without breaking things?).

Similar promises were made with OAuth to OAuth2, HTML4 to XHTML, XML-RPC to SOAP, and RSS to Atom: let’s make a new standard, but better this time. Similar consequences befell them too: some failed to be adopted, others failed to dislodge their predecessors, and some just avoided making an interoperable standard.

The failure to replace a working system with a more sophisticated one is so common that it has a name: the second-system effect, although it’s usually applied to products rather than protocols.

For the current attempts at HTTP2, you can perhaps breathe a sigh of relief. They can upgrade without breaking old code. They aren’t introducing distributed garbage collection. There is production code from more than one vendor.

It’s starting off so reasonably; what could go wrong?